ASP Virtual Implementation PDF
Implementation Guide
3/18/11
All rights reserved. No part of this documentation shall be reproduced, stored in a retrieval system, or
transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without the
prior written permission of Invensys Systems, Inc. No copyright or patent liability is assumed with respect
to the use of the information contained herein. Although every precaution has been taken in the
preparation of this documentation, the publisher and the author assume no responsibility for errors or
omissions. Neither is any liability assumed for damages resulting from the use of the information
contained herein.
The information in this documentation is subject to change without notice and does not represent a
commitment on the part of Invensys Systems, Inc. The software described in this documentation is
furnished under a license or nondisclosure agreement. This software may be used or copied only in
accordance with the terms of these agreements.
All terms mentioned in this documentation that are known to be trademarks or service marks have been
appropriately capitalized. Invensys Systems, Inc. cannot attest to the accuracy of this information. Use of
a term in this documentation should not be regarded as affecting the validity of any trademark or service
mark.
Alarm Logger, ActiveFactory, ArchestrA, Avantis, DBDump, DBLoad, DT Analyst, Factelligence,
FactoryFocus, FactoryOffice, FactorySuite, FactorySuite A2, InBatch, InControl, IndustrialRAD,
IndustrialSQL Server, InTouch, MaintenanceSuite, MuniSuite, QI Analyst, SCADAlarm, SCADASuite,
SuiteLink, SuiteVoyager, WindowMaker, WindowViewer, Wonderware, Wonderware Factelligence, and
Wonderware Logger are trademarks of Invensys plc, its subsidiaries and affiliates. All other brands may
be trademarks of their respective owners.
Contents
Welcome .................................................. 11
Documentation Conventions ......................................................... 11
Technical Support .......................................................................... 12
About DR ................................................................................. 26
Disaster Recovery Scenarios ................................................... 27
High Availability with Disaster Recovery ................................. 27
About HADR ............................................................................ 27
HADR Scenarios ...................................................................... 28
Planning the Virtualized System .................................................. 28
Assessing Your System Platform Installation ........................... 29
Sizing Recommendations for Virtualization .............................. 30
Cores and Memory .................................................................. 30
Storage ..................................................................................... 30
Networks .................................................................................. 31
Recommended Minimums for System Platform .................... 31
Defining High Availability .......................................................... 33
Defining Disaster Recovery ........................................................ 34
Defining High Availability and Disaster Recovery Combined . 35
Glossary................................................. 711
Index..................................................... 719
Welcome
Documentation Conventions
This documentation uses the following conventions:
Technical Support
Wonderware Technical Support offers a variety of support options to
answer any questions on Wonderware products and their
implementation.
Before you contact Technical Support, refer to the relevant section(s)
in this documentation for a possible solution to the problem. If you
need to contact technical support for help, have the following
information ready:
• The type and version of the operating system you are using.
• Details of how to recreate the problem.
• The exact wording of the error messages you saw.
• Any relevant output listing from the Log Viewer or any other
diagnostic applications.
• Details of what you did to try to solve the problem(s) and your
results.
• If known, the Wonderware Technical Support case number
assigned to your problem, if this is an ongoing problem.
Chapter 1
Understanding Virtualization
Virtualization is the creation of an abstracted or simulated—virtual,
rather than actual—version of something, such as an operating
system, server, network resource, or storage device. Virtualization
technology abstracts the hardware from the software, extending the
life cycle of a software platform.
In virtualization, a single piece of hardware, such as a server, hosts
and coordinates multiple guest operating systems. No guest operating
system is aware that it is sharing resources and running on a layer of
virtualization software rather than directly on the host hardware.
Each guest operating system appears as a complete, hardware-based
OS to the applications running on it.
Definitions
This implementation guide assumes that you and your organization
have done the necessary research and analysis and have decided to
implement ArchestrA System Platform in a virtualized environment,
replacing the need for dedicated physical computers. Such an environment
can take advantage of advanced virtualization features including High
Availability and Disaster Recovery. In that context, we'll define the
terms as follows:
• Virtualization can be defined as creating a virtual, rather than
real, version of ArchestrA System Platform or one of its
components, including servers, nodes, databases, storage
devices, and network resources.
• High Availability (HA) can be defined as a primarily automated
ArchestrA System Platform design and associated services
implementation which ensures that a pre-defined level of
operational performance will be met during a specified,
limited time frame.
• Disaster Recovery (DR) can be defined as the organizational,
hardware and software preparations for ArchestrA System
Platform recovery or continuation of critical System
Platform infrastructure after a natural or human-induced
disaster.
While these definitions are general and allow for a variety of HA and
DR designs, this implementation guide focuses on virtualization, an
indispensable element in creating the redundancy necessary for HA
and DR solutions.
The virtualized environment described in this guide is based on
Microsoft Hyper-V technology incorporated in the Windows Server
2008 R2 operating system.
Types of Virtualization
There are eight types of virtualization:
• Hardware: A software execution environment separated from the
underlying hardware resources. Includes hardware-assisted
virtualization, full and partial virtualization, and paravirtualization.
Hyper-V Architecture
Hyper-V implements Type 1 hypervisor virtualization, in which the
hypervisor primarily is responsible for managing the physical CPU
and memory resources among the virtual machines. This basic
architecture is illustrated in the following diagram.
Hyper-V limits:

Per virtual machine
    Virtual processors         4
    Memory                     64 GB
    Virtual floppy drives      1

Per host
    Logical processors                            64
    Virtual processors per logical processor      8
    Memory                                        1 TB
Levels of Availability
When planning a virtualization implementation—for High
Availability, Disaster Recovery, Fault Tolerance, and Redundancy—it
is helpful to consider levels or degrees of redundancy and availability,
described in the following table.
High Availability
About HA
High Availability refers to the availability of resources in a computer
system following the failure or shutdown of one or more components of
that system.
At one end of the spectrum, traditional HA has been achieved through
custom-designed and redundant hardware. This solution produces
High Availability, but has proven to be very expensive.
At the other end of the spectrum are software solutions designed to
function with off-the-shelf hardware. This type of solution typically
results in significant cost reduction, and has proven to survive single
points of failure in the system.
Disaster Recovery
About DR
Disaster Recovery planning typically involves policies, processes, and
planning at the enterprise level, which is well outside the scope of this
implementation guide.
DR, at its most basic, is all about data protection. The most common
strategies for data protection include the following:
• Backups made to tape and sent off-site at regular intervals,
typically daily.
• Backups made to disk on-site, automatically copied to an off-site
disk, or made directly to an off-site disk.
• Replication of data to an off-site location, making use of storage
area network (SAN) technology. This strategy eliminates the need
to restore the data. Only the systems need to be restored or synced.
• High availability systems which replicate both data and system
off-site. This strategy enables continuous access to systems and
data.
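For example, the disk-to-disk strategy above can be as simple as a scheduled script that copies the newest backup set to an off-site share. A minimal sketch in Python, assuming hypothetical local and off-site paths (adjust both to your environment):

import shutil
from datetime import date
from pathlib import Path

# Hypothetical locations; substitute your own.
LOCAL_BACKUPS = Path(r"D:\Backups\Nightly")
OFFSITE_SHARE = Path(r"\\offsite-nas\SystemPlatformBackups")

def copy_latest_backup():
    """Copy the newest local backup file to a dated folder on the off-site share."""
    backups = sorted(LOCAL_BACKUPS.glob("*.bak"), key=lambda p: p.stat().st_mtime)
    if not backups:
        raise FileNotFoundError(f"No backup files found in {LOCAL_BACKUPS}")
    latest = backups[-1]
    target_dir = OFFSITE_SHARE / date.today().isoformat()
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(latest, target_dir / latest.name)
    return target_dir / latest.name

if __name__ == "__main__":
    print("Copied:", copy_latest_backup())

Scheduling such a script nightly (for example, with Windows Task Scheduler) approximates the disk-to-disk, off-site copy strategy described above.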
About HADR
The goal of a High Availability and Disaster Recovery (HADR) solution
is to provide a means to shift data processing and retrieval to a
standby system in the event of a primary system failure.
Typically, HA and DR are considered as individual architectures. HA
and DR combined treat these concepts as a continuum. If your system
is geographically distributed, for example, HA combined with DR can
make it both highly available and quickly able to recover from a
disaster.
HADR Scenarios
The basic HADR architecture implementation described in this guide
builds on both the HA and DR architectures adding an offline system
plus storage at "Site A". This creates a complete basic HA
implementation at "Site A" plus a DR implementation at "Site B" when
combined with distributed storage.
Once you have diagrammed your topology, you can build a detailed
inventory of the system hardware and software.
Microsoft offers tools to assist with virtualization assessment and
planning.
• Microsoft Assessment and Planning Toolkit (MAP)
The MAP toolkit is useful for a variety of migration projects,
including virtualization. The component package for this
automated tool is available for download from Microsoft at the
following address:
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730&displaylang=en
• Infrastructure Planning and Design Guides for Virtualization
(IPD)
The IPD Guides from Microsoft provide a series of guides
specifically geared to assist with virtualization planning. They are
available for download from Microsoft at the following address:
http://technet.microsoft.com/en-us/solutionaccelerators/ee395429
Hyper-Threading
Hyper-Threading Technology can be used to increase the apparent
number of cores, but it does impact performance. A true 8-core CPU
will perform better than a 4-core CPU using Hyper-Threading.
Storage
It is always important to plan for proper storage. A best practice is to
dedicate a local drive, or a virtual drive on a Logical Unit Number (LUN),
to each of the VMs being hosted. We recommend SATA or higher-performance
interfaces.
Networks
Networking is as important as any other component for the overall
performance of the system.
Small Systems

Virtual Node          Cores    Memory (GB)    Storage (GB)
GR Node               2        2              100
Historian             2        2              250
Application Server    2        2              100
Information Server    2        2              100
Historian Clients     2        2              100

Medium Systems

Virtual Node          Cores    Memory (GB)    Storage (GB)
GR Node               4        4              250
Historian             4        4              500
Information Server    4        4              100
Historian Clients     2        4              100
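As a quick sanity check, the per-node minimums above can be totaled and compared against a candidate host. A minimal Python sketch using the small-system figures; the host capacity shown is a placeholder, and some headroom should be reserved for the host operating system:

# Recommended minimums per virtual node for a small system: (cores, memory GB, storage GB).
SMALL_SYSTEM = {
    "GR Node":            (2, 2, 100),
    "Historian":          (2, 2, 250),
    "Application Server": (2, 2, 100),
    "Information Server": (2, 2, 100),
    "Historian Clients":  (2, 2, 100),
}

# Hypothetical host capacity to validate against.
HOST = {"cores": 16, "memory_gb": 24, "storage_gb": 1000}

def check_capacity(nodes, host):
    cores = sum(c for c, _, _ in nodes.values())
    memory = sum(m for _, m, _ in nodes.values())
    storage = sum(s for _, _, s in nodes.values())
    print(f"Required: {cores} cores, {memory} GB RAM, {storage} GB disk")
    ok = (cores <= host["cores"] and memory <= host["memory_gb"]
          and storage <= host["storage_gb"])
    print("Host is", "sufficient" if ok else "undersized")

check_capacity(SMALL_SYSTEM, HOST)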
After installation of the server, you can either start from scratch or use
the existing installation. A free tool from Microsoft TechNet, Disk2vhd,
supports extracting a physical machine to a VHD file.
Disk2vhd tool is available for download from Microsoft at the following
address:
http://technet.microsoft.com/en-us/sysinternals/ee656415
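A minimal Python sketch of scripting such a capture. The tool path, output location, and the "-accepteula" and "*" (all volumes) arguments are assumptions based on common Sysinternals conventions; verify them against the Disk2vhd documentation before use:

import subprocess
from pathlib import Path

# Assumed locations; adjust to your environment.
DISK2VHD = Path(r"C:\Tools\disk2vhd.exe")
OUTPUT_VHD = Path(r"\\fileserver\vhd-staging\GRNode.vhd")

def capture_physical_machine():
    """Capture the local volumes of this machine into a single VHD file."""
    OUTPUT_VHD.parent.mkdir(parents=True, exist_ok=True)
    # Assumed command line; confirm the supported switches for your Disk2vhd version.
    cmd = [str(DISK2VHD), "-accepteula", "*", str(OUTPUT_VHD)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    capture_physical_machine()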
Another tool you can use to migrate physical machines into a virtual
environment is System Center Virtual Machine Manager 2008 (VMM 2008).
This tool is available for purchase from Microsoft. For more information,
see the following Microsoft address:
http://www.microsoft.com/systemcenter/en/us/virtual-machine-manager.aspx
Chapter 2
• On the host servers, disable all network cards that are not used by
the System Platform environment. This avoids confusion during
network selection while setting up the cluster.
• Ensure that the virtual networks created in Hyper-V Manager have
the same name across all the nodes participating in the cluster.
Otherwise, migration/failover of Hyper-V virtual machines will fail.
Historian
• During Live and Quick migration of the Historian, you may notice
that the Historian logs values with quality detail 448 and that some
values are logged twice with the same timestamps. This happens
because the suspended Historian VM resumes on the other cluster
node with the system time at which it was suspended before the
migration. As a result, some of the data points arriving with the
current time appear to the Historian to be in the future, so the
Historian modifies their timestamps to its own system time and sets
the quality detail to 448. This continues until the system time of
the Historian node catches up with the real current time via the
TimeSync utility, after which the problem goes away. We therefore
recommend stopping the Historian before the migration and restarting
it after the VM is migrated and its system time is synchronized with
the current time (see the sketch following this list).
• Live and Quick migration of the Historian should not be done while
a block changeover is in progress on the Historian node.
• If a failover happens (for example, due to a network disconnect
on the source Host Virtualization Server) while the Historian
status is still “Starting”, the Historian node fails over to the
target Host Virtualization Server. In the target host, Historian
fails to start. To recover from this state, kill the Historian
services that failed to start and then start the Historian by
launching the SMC.
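One way to follow the recommendation in the first bullet above is to script the Historian stop and restart around the migration. A minimal Python sketch; the service name is a placeholder that must be replaced with the actual Historian service name shown in the Services console, and w32tm is the built-in Windows time client used here instead of the guide's TimeSync utility:

import subprocess
import time

# Placeholder service name; substitute the actual Historian service
# name from the Services console on your Historian node.
HISTORIAN_SERVICE = "WonderwareHistorian"

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

def stop_historian():
    run(["net", "stop", HISTORIAN_SERVICE])

def start_historian_after_timesync():
    # Re-sync the VM clock first so the Historian does not see "future" data.
    run(["w32tm", "/resync"])
    time.sleep(5)
    run(["net", "start", HISTORIAN_SERVICE])

if __name__ == "__main__":
    stop_historian()
    input("Perform the Live/Quick migration, then press Enter...")
    start_historian_after_timesync()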
InTouch
• Ensure that InTouch WindowViewer is added to the Startup
programs so that the view starts automatically when the virtual
machine reboots.
Application Server
• If a failover happens (for example, due to a network disconnect
on the source Host Virtualization Server) while the Galaxy
migration is in progress, the GR node fails over to the target
Host Virtualization Server. In the target host, on opening the
IDE for the Galaxy, the templates do not appear in the Template
Toolbox or the Graphic Toolbox. To recover from this state,
delete the Galaxy, create a new one, and initiate the migration
process once again.
• If a failover happens (for example, due to an abrupt power-off
on the source Host Virtualization Server) while a platform
deploy is in progress, the Platform node fails over to the target
Host Virtualization Server. In the target host, some objects will
be in deployed state and the rest will be in undeployed state. To
recover from this state, redeploy the whole Platform once
again.
• If a failover happens (for example, due to an abrupt power-off
on the source Host Virtualization Server) while a platform
undeploy is in progress, the Platform node fails over to the
target Host Virtualization Server. In the target host, some
objects will be in undeployed state and the rest will be in
deployed state. To recover from this state, undeploy the whole
Platform once again.
Note: In the event that the private network becomes disabled, you
may need to add a script to enable a failover. For more information, see
"Failover of the Virtual Machine if the Domain/Private Network is
disabled" on page 80.
Hyper-V Hosts
Memory    12 GB

Note: For the Hyper-V hosts to function optimally, the servers should
have the same processor, RAM, storage, and service pack level.
Preferably, the servers should be purchased in pairs to avoid hardware
discrepancies. Though differences are supported, they will impact
performance during failovers.
Virtual Machines
Using the Hyper-V host specified above, three virtual machines can
be created with the following configuration:

Virtual Machine 1: Memory 4 GB, Storage 80 GB
Virtual Machine 2: Memory 2 GB, Storage 40 GB
Virtual Machine 3: Memory 4 GB, Storage 40 GB
Network Requirements
For this high availability architecture, you need two physical network
cards installed on the host computer and configured to separate the
domain network from the process network.
Note: Repeat the above procedure to include all the other nodes that
are part of the Cluster configuration process.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
Note: You can either enter the server name or click Browse and select
the relevant server name.
Note: You can add one or more server names. To remove a server
from the Selected servers list, select the server and click Remove.
5 Click the Run only tests I select option to skip the storage validation
process, and then click Next. The Test Selection area appears.
Note: Click the Run all tests (recommended) option to validate the
default selection of tests.
6 Clear the Storage check box, and then click Next. The Summary
screen appears.
7 Click View Report to view the test results or click Finish to close
the Validate a Configuration Wizard window.
A warning message appears indicating that all tests have not been
run. This usually happens in a multisite cluster where storage tests
are skipped. You can proceed if there is no other error message. If the
report indicates any other error, you need to fix the problem and rerun
the tests before you continue. You can view the results of the tests
after you close the wizard in SystemRoot\Cluster\Reports\Validation
Report date and time.html where SystemRoot is the folder in which the
operating system is installed (for example, C:\Windows).
To learn more about cluster validation tests, click More about cluster
validation tests in the Validate a Configuration Wizard window.
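If you need to review the report later, a minimal Python sketch that opens the most recent validation report, assuming the default report location and naming described above:

from pathlib import Path
import os
import webbrowser

# The wizard writes its reports under %SystemRoot%\Cluster\Reports.
REPORTS = Path(os.environ.get("SystemRoot", r"C:\Windows")) / "Cluster" / "Reports"

def open_latest_validation_report():
    reports = sorted(REPORTS.glob("Validation Report*.html"),
                     key=lambda p: p.stat().st_mtime)
    if not reports:
        raise FileNotFoundError(f"No validation reports found in {REPORTS}")
    webbrowser.open(reports[-1].as_uri())

if __name__ == "__main__":
    open_latest_validation_report()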
Creating a Cluster
To create a cluster, you need to run the Create Cluster wizard.
To create a cluster
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
4 View the instructions and click Next. The Validation Warning area
appears.
a In the Enter server name field, enter the relevant server name
and click Add. The server name is added to the Selected
servers box.
Note: You can either enter the server name or click Browse to select
the relevant server name.
7 In the Cluster Name field, enter the name of the cluster and click
Next. The Confirmation area appears.
8 Click Next. The cluster is created and the Summary area appears.
9 Click View Report to view the cluster report created by the wizard
or click Finish to close the Create Cluster Wizard window.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
5 Check the summary pane of the networks and ensure that Cluster Use
is disabled for any network that is not required for cluster
communication.

Note: Repeat the above process for each additional network that is not
required for cluster communication.
To create and secure a file share for the node and file share
majority quorum
1 Create a new folder on the system that will host the share
directory.
2 Right-click the folder that you created and click Properties. The
Quorum Properties window for the folder that you created
appears.
3 Click the Sharing tab, and then click Advanced Sharing. The
Advanced Sharing window appears.
4 Select the Share this folder check box and click Permissions. The
Permissions for Quorum window appears.
6 In the Enter the object name to select box, enter the two node
names used for the cluster in the small node configuration and
click OK. The node names are added and the Permissions for
Quorum window appears.
7 Select the Full Control, Change, and Read check boxes and click
OK. The Properties window appears.
8 Click OK. The folder is shared and can be used to create virtual
machines.
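The share-level permissions in the steps above can also be applied from a script. A minimal Python sketch using the built-in net share command; the share path, share name, and node computer accounts are placeholders, and the NTFS permissions described above must still be granted separately:

import subprocess
from pathlib import Path

# Hypothetical share location and cluster node computer accounts; adjust as needed.
SHARE_PATH = Path(r"C:\QuorumShare")
SHARE_NAME = "Quorum"
NODE_ACCOUNTS = ["MYDOMAIN\\Node1$", "MYDOMAIN\\Node2$"]

def create_quorum_share():
    SHARE_PATH.mkdir(parents=True, exist_ok=True)
    # Build "net share" arguments granting Full Control to each node account.
    cmd = ["net", "share", f"{SHARE_NAME}={SHARE_PATH}"]
    for account in NODE_ACCOUNTS:
        cmd.append(f"/GRANT:{account},FULL")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    create_quorum_share()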
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 Right-click the name of the cluster you created and click More
Actions. Click Configure Cluster Quorum Settings. The Configure
Cluster Quorum Wizard window appears.
3 View the instructions on the wizard and click Next. The Select
Quorum Configuration area appears.
Note: The Before you Begin screen appears the first time you run the
wizard. You can hide this screen on subsequent uses of the wizard.
Note: Click the Node Majority option if the cluster is configured for
node majority or a single quorum resource. Click the Node and Disk
Majority option if the number of nodes is even and not part of a
multisite cluster. Click the No Majority: Disk Only option if the disk
being used is only for the quorum.
Note: You can either enter the server name or click Browse to select
the relevant shared path.
6 The details you have selected are displayed. To confirm the details
click Next. The Summary area appears and the configuration
details of the quorum settings are displayed.
After you configure the cluster quorum, you must validate the cluster.
For more information, refer to
http://technet.microsoft.com/en-us/library/bb676379(EXCHG.80).aspx.
Configuring Storage
For a smaller virtualization environment, storage is one of the central
barriers to implementing a good virtualization strategy. With Hyper-V,
however, VM storage is kept on a Windows file system, and users can put
VMs on any file system that a Hyper-V server can access. As a result,
HA can be built into the virtualization platform and into the storage for
the virtual machines. This configuration can accommodate a host failure
by making storage accessible to all Hyper-V hosts, so that any host can
run VMs from the same path on the shared folder. The back end of this
storage can be local storage, a storage area network (SAN), iSCSI, or
whatever else is available to fit the implementation.
For this architecture, a shared folder is used. How to use the shared
folder in the failover cluster for high availability is described in the
section "Configuring Virtual Machines" on page 171.
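Before placing VMs on the shared folder, it is worth confirming that every Hyper-V host can both reach and write to it. A minimal Python sketch to run on each host; the UNC path is a placeholder:

import os
from pathlib import Path

# Hypothetical UNC path of the shared folder that holds the VM files.
VM_SHARE = Path(r"\\storage-server\VMs")

def check_share_access():
    """Run on each Hyper-V host to confirm the VM share is reachable and writable."""
    if not VM_SHARE.exists():
        print("Share not reachable:", VM_SHARE)
        return False
    probe = VM_SHARE / f"access_probe_{os.environ.get('COMPUTERNAME', 'host')}.tmp"
    try:
        probe.write_text("ok")
        probe.unlink()
    except OSError as exc:
        print("Share reachable but not writable:", exc)
        return False
    print("Share is reachable and writable from this host.")
    return True

if __name__ == "__main__":
    check_share_access()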
Configuring Hyper-V
Microsoft Hyper-V Server 2008 R2 helps you create a virtual
environment that improves server utilization. It enhances patching,
provisioning, management, support tools, processes, and skills.
Microsoft Hyper-V Server 2008 R2 provides live migration, cluster
shared volume support, and expanded processor and memory support for
host systems.
Hyper-V is available in x64-based versions of the Windows Server 2008 R2
operating system, specifically the x64-based versions of Windows
Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and
Windows Server 2008 R2 Datacenter.
The following are the prerequisites to set up Hyper-V:
• x64-based processor
• Hardware-assisted virtualization
• Hardware Data Execution Prevention (DEP)
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 In the Roles pane, under Roles Summary area, click Add Roles.
The Add Roles Wizard window appears.
Note: You can also right-click Roles, and then click Add Roles Wizard
to open the Add Roles Wizard window.
3 View the instructions on the wizard and click Next. The Select
Server Roles area appears.
4 Select the Hyper-V check box and click Next. The Create Virtual
Networks area appears.
5 Select the check box next to the required network adapter to make
the connection available to virtual machines. Click Next. The
Confirm Installation Selections area appears.
3 View the instructions in the Before You Begin area and click Next.
The Specify Name and Location area appears.
c In the Location box, enter the location where you want to store
the virtual machine.
Note: You can either enter the path to the filename or click Browse to
select the relevant location.
6 Select the network to be used for the virtual machine and click
Next. The Connect Virtual Hard Disk area appears.
7 Click the Create a virtual hard disk option and then do the
following:
a In the Name box, enter the name of the virtual hard disk.
b In the Location box, enter the location of the virtual hard disk.

Note: You can either enter the location or click Browse to select the
location of the virtual hard disk, and then click Next.

c In the Size box, enter the size of the virtual hard disk and then
click Next. The Installation Options area appears.

Note: You need to click either the Use an existing virtual hard disk
or the Attach a virtual hard disk later option only if you are using an
existing virtual hard disk or you want to attach a virtual disk later.
8 Click the Install an operating system later option and click Next.
The Completing the New Virtual Machine Window area appears.
9 Click Finish. The virtual machine is created with the details you
provided. Because this process was started from the Failover
Cluster Manager, the High Availability Wizard window appears
after the virtual machine is created.
10 Click View Report to view the report or click Finish to close the
High Availability Wizard window.
Note: You can use the above procedure to create multiple virtual
machines with appropriate names and configuration.
Virtual Node    IO tags (Approx.)    Historized tags (Approx.)
GR              10000                2500

Topic       Update Rate (ms)    IO tags (Approx.)    Historized tags (Approx.)
Topic 1     10000               1                    1
Topic 7     600000              40                   16
Topic 39    1000                4                    4
• Engine 1 : 9
• Engine 2 : 13
• Engine 3 : 13
• Engine 4 : 225
• Engine 5 : 118
• Engine 6 : 118
• Engine 7 : 195
• Engine 8 : 225
• Engine 9 : 102
• Engine 10: 2
• Engine 11: 3
• Engine 12: 21
• Engine 13: 1
• Engine 14: 1
The total number of DI objects is 6.
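A quick total of the configuration load above, assuming the per-engine figures are object counts as listed:

# Object count hosted by each of the 14 engines in the test configuration.
ENGINE_OBJECTS = [9, 13, 13, 225, 118, 118, 195, 225, 102, 2, 3, 21, 1, 1]
DI_OBJECTS = 6

total = sum(ENGINE_OBJECTS)
print(f"{len(ENGINE_OBJECTS)} engines hosting {total} objects, plus {DI_OBJECTS} DI objects")
# -> 14 engines hosting 1046 objects, plus 6 DI objects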
[Table: Scenario / Observation summary.]

Live Migration

[Table: Primary node, Products, RTO, RPO, and data loss (tags and duration) for each Live Migration scenario.]

Quick Migration

[Table: Primary node, Products, RTO, RPO, and data loss (tags and duration) for each Quick Migration scenario.]

Shut down

[Table: Primary node, Products, RTO, RPO, and data loss (tags and duration) for each shutdown scenario. Recorded values include: WIS node, InTouch, RTO 415 sec + time taken by the user to start InTouchView, RPO 556 sec + time taken to run the viewer.]
In the Historian trend below, the AppEngine node tags are:
Tag1: SystemTimeSec tag of Historian local tag
Tag2: I/O Tag (P31541.I10) getting data from DASSI of GR node
Tag3: InTouch Tag (_PP$Second) from WIS node historizing to GR
node
Tag4: Script Tag (Sinewaveval_002.SineWaveValue) of AppEngine
node
The Industrial Application Server (IAS) script tag does not have any
data loss because the data is stored in the store-and-forward (SF)
folder on the AppEngine node. This data is forwarded after the Live
Migration completes.
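One way to quantify the data loss visible in such trends is to query the Historian directly and look for gaps larger than the expected storage interval. A minimal Python sketch, assuming the pyodbc package, a trusted connection to a Historian node named HISTORIAN, the standard Runtime database History view with the wwRetrievalMode extension, and the script tag from the example configuration:

import pyodbc
from datetime import datetime, timedelta

# Assumed connection details and tag; adjust for your Historian node.
CONN_STR = ("DRIVER={SQL Server};SERVER=HISTORIAN;DATABASE=Runtime;"
            "Trusted_Connection=yes")
TAG = "Sinewaveval_002.SineWaveValue"
EXPECTED_INTERVAL = timedelta(seconds=2)   # assumed storage rate for the tag

def find_gaps(start, end):
    """Return (gap_start, gap_end) pairs where stored samples are further apart than expected."""
    query = ("SELECT DateTime FROM History "
             "WHERE TagName = ? AND DateTime >= ? AND DateTime <= ? "
             "AND wwRetrievalMode = 'Delta' ORDER BY DateTime")
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(query, TAG, start, end).fetchall()
    gaps = []
    for previous, current in zip(rows, rows[1:]):
        if current[0] - previous[0] > EXPECTED_INTERVAL:
            gaps.append((previous[0], current[0]))
    return gaps

if __name__ == "__main__":
    end = datetime.now()
    for gap in find_gaps(end - timedelta(hours=1), end):
        print("Gap:", gap[0], "->", gap[1])

The reported gap durations correspond to the RPO figures recorded in the migration tables.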
Observations
GR is Platform1 in the deployed Galaxy. During Live Migration of the
GR node, there will inevitably be an interval when the following occurs:
• The GR/Historian node is not able to connect to the other deployed
nodes (Platforms 2, 3, and 4).
• The other nodes are not able to connect to the GR node (Platform1).
• Some data sent from the GR node is discarded until the TimeSync
utility is executed and the system time of the GR node is synchronized.
• The AppEngine node is in Store Forward mode.
• The Historian Client trend is not able to connect to the Historian, so
the warning message is expected.
• The Historian machine's time is not synchronized during Live
Migration of the Historian, so the "Attempt to store values in the
future" message is expected.
After Live Migration, the Historian's time needs to be synchronized.
Therefore, the server time shift warning message is expected.
GR node
36806448  2/7/2011 7:39:13 PM  23962556  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
36806449  2/7/2011 7:39:13 PM  23962556  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
36806450  2/7/2011 7:39:13 PM  23962556  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
AppEngine node
22026313  2/7/2011 7:39:08 PM  25642580  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22026420  2/7/2011 7:39:19 PM  27203000  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
22026446  2/7/2011 7:39:50 PM  27203000  Warning  ScriptRuntime  Insert.InsertValueWT: Script timed out.
WIS node
1717470  2/7/2011 7:39:07 PM  37683684  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms
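The warning entries above, and in the sections that follow, share a fixed layout: message number, date and time, process/thread identifiers, severity, component, and message text. A minimal Python sketch that scans a log export in roughly that layout and counts heartbeat-timeout warnings per platform; the export file name and the exact exported format are assumptions:

import re
from collections import Counter

# Assumed export file; the layout mirrors the excerpts in this section.
LOG_FILE = "smc_log_export.txt"

# message number, date, time, pid/tid digits, severity, component, message
LINE = re.compile(
    r"^(?P<msgno>\d+)\s+(?P<date>\d+/\d+/\d+)\s+(?P<time>[\d:]+\s[AP]M)\s+"
    r"(?P<ids>\d+)\s+(?P<severity>\w+)\s+(?P<component>\w+)\s+(?P<message>.*)$"
)
HEARTBEAT = re.compile(r"Platform (\d+) exceed maximum heartbeats timeout")

def count_heartbeat_timeouts(path):
    counts = Counter()
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            match = LINE.match(line.strip())
            if not match or match["severity"] != "Warning":
                continue
            hb = HEARTBEAT.search(match["message"])
            if hb:
                counts[f"Platform {hb.group(1)}"] += 1
    return counts

if __name__ == "__main__":
    for platform, count in count_heartbeat_timeouts(LOG_FILE).items():
        print(platform, count)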
Observations
AppEngine is Platform2 in the deployed Galaxy. During Live Migration
of the AppEngine node, there will inevitably be an interval when the
following occurs:
• AppEngine is not able to connect to the other deployed nodes
(Platforms 1, 3, and 4).
• The other nodes are not able to connect to AppEngine (Platform2).
• Some data sent from AppEngine is discarded until the TimeSync
utility is executed and the system time of AppEngine is synchronized,
so the Historian is bound to discard data from the AppEngine node.
As a result, you see the following warnings on each of the VM nodes.
GR node
36805850  2/7/2011 5:19:02 PM  23962556  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
AppEngine
22026179  2/7/2011 5:19:05 PM  25642580  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22026180  2/7/2011 5:19:05 PM  25642580  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
22026181  2/7/2011 5:19:05 PM  25642580  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
WIS node
1713435  1/27/2011 7:52:59 PM  40642196  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
Observations
The WIS node is Platform4 in the deployed Galaxy. During Live Migration
of the WIS node, there will inevitably be an interval when the following
occurs:
• The other nodes are not able to connect to the WIS node (Platform4).
• The Historian Client trend is not able to connect to the Historian, so
the warning is expected.
AppEngine node
22025567  2/7/2011 3:07:40 PM  25082696  Warning  ScriptRuntime  Insert.InsertValueWT: Script timed out.
22025568  2/7/2011 3:10:24 PM  25082696  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
22025569  2/7/2011 3:11:42 PM  25082696  Warning  ScriptRuntime  Insert.InsertValueWT: Script timed out.
WIS node
None
These are captured in the RPO table for Quick Migration of the GR
node.
The Industrial Application Server (IAS) script tag does not have any
data loss because the data is stored in the SF folder on the AppEngine
node. This data is forwarded after the Quick Migration completes.
Observations
GR is Platform1 in the deployed Galaxy. During Quick Migration of
the GR node, there will inevitably be an interval when the following
occurs:
• The GR/Historian node is not able to connect to the other deployed
nodes (Platforms 2, 3, and 4).
• The other nodes are not able to connect to the GR node (Platform1).
• Some data sent from the GR node is discarded until the TimeSync
utility is executed and the system time of the GR node is synchronized.
• The AppEngine node is in Store Forward mode.
• The Historian Client trend is not able to connect to the Historian, so
the warning message is expected.
• The Historian machine's time is not synchronized during Quick
Migration of the Historian, so the "Attempt to store values in the
future" message is expected.
• After Quick Migration, the Historian's time needs to be synchronized;
therefore, the server time shift warning message is expected.
After the Quick Migration of the GR/Historian node, the stored data is
forwarded from the AppEngine node.
As a result, you see the following warnings on each of the VM nodes.
GR node
37297335  2/15/2011 5:34:19 PM  7202748  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
37297336  2/15/2011 5:34:19 PM  7202748  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
37297390  2/15/2011 5:34:25 PM  12844968  Warning  DDESuiteLinkClient  CTopic::RemoveItems didn't get executed...: connection handle m_hConn=0x00000000, connection status m_bConnected=false, host handle m_pHost=0x0038eb80
AppEngine node
22048294  2/15/2011 5:30:33 PM  26482672  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22048433  2/15/2011 5:34:28 PM  42764372  Warning  ScriptRuntime  Insert.InsertValueWT: Script timed out.
WIS node
1719909  2/15/2011 5:30:33 PM  29443380  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
1719923  2/15/2011 5:31:29 PM  49323684  Warning  aaAFCommonTypes  Unable to create dataview: Server must be logged on before executing SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value, aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality, aaIQ=Quality, aaQD=QualityDetail, aaS=0 FROM History
Observations
AppEngine is Platform2 in the deployed Galaxy. During Quick
Migration of the AppEngine node, there will inevitably be an interval
when the following occurs:
• AppEngine is not able to connect to the other deployed nodes
(Platforms 1, 3, and 4).
• The other nodes are not able to connect to AppEngine (Platform2).
• Some data sent from AppEngine is discarded until the TimeSync
utility is executed and the system time of AppEngine is synchronized,
so the Historian is bound to discard data from the AppEngine node.
• AppEngine's time is not synchronized during Quick Migration, so a
"Values in the past did not fit within the realtime window" message
is expected on AppEngine.
As a result, you see the following warnings on each of the VM nodes.
GR node
41321416  2/16/2011 3:58:23 PM  23962780  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms
AppEngine
22056337  2/16/2011 4:00:28 PM  25162532  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22056338  2/16/2011 4:00:28 PM  25162532  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
22056339  2/16/2011 4:00:28 PM  25162532  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
22056474  2/16/2011 4:01:06 PM  31563160  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
WIS node
1720411  2/16/2011 3:58:21 PM  36523428  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
Observations
The WIS node is Platform4 in the deployed Galaxy. During Quick
Migration of the WIS node, there will inevitably be an interval when the
following occurs:
• The WIS node is not able to connect to the other deployed nodes
(Platforms 1, 2, and 3).
• The other nodes are not able to connect to the WIS node (Platform4).
• The Historian Client trend is not able to connect to the Historian, so
the warning is expected.
• The Historian machine's time is not synchronized during Quick
Migration of the WIS node, so the "Attempt to store values in the
future" message is expected.
As a result, you see the following warnings on each of the VM nodes.
GR node
41321553  2/16/2011 4:19:13 PM  23962780  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
AppEngine node
22056614  2/16/2011 4:19:13 PM  25162532  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
WIS node
1720417  2/16/2011 4:21:30 PM  36523428  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
1720418  2/16/2011 4:21:30 PM  36523428  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms
Observations
The following warnings are observed.
GR node
37297655  2/15/2011 6:09:39 PM  7202748  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
37297656  2/15/2011 6:09:46 PM  7202748  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
37297657  2/15/2011 6:17:21 PM  7202748  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
37297719  2/15/2011 6:17:22 PM  12844968  Warning  ScanGroupRuntime2  Can't convert Var data type to MxDataType
AppEngine
22048588  2/15/2011 6:13:36 PM  26482672  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22048589  2/15/2011 6:13:36 PM  26482672  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
22048590  2/15/2011 6:13:36 PM  26482672  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
22048591  2/15/2011 6:13:36 PM  26482744  Warning  MessageChannel  SGR address was not resolved. Error = 10022
22048694  2/15/2011 6:14:35 PM  42764372  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
WIS node
1719936  2/15/2011 6:15:44 PM  29443380  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
1719937  2/15/2011 6:15:44 PM  29443380  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
1719944  2/15/2011 6:16:36 PM  49324412  Warning  aaAFCommonTypes  Unable to create dataview: Server must be logged on before executing SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value, aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality, aaIQ=Quality, aaQD=QualityDetail, aaS=0 FROM History
Observations
The following warnings are observed.
GR node
The following warnings are observed on the GR node during power-off
of the host server.
36904491  2/14/2011 11:48:53 PM  3040892  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms.
36904492  2/14/2011 11:48:53 PM  3040892  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
36904493  2/14/2011 11:48:53 PM  3040892  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
36904568  2/14/2011 11:49:00 PM  42964300  Warning  ScriptRuntime  DDESuiteLinkClient_Buffered.reconnect: Script timed out.
36904569  2/14/2011 11:49:00 PM  42964300  Warning  DDESuiteLinkClient  CTopic::RemoveItems didn't get executed...: connection handle m_hConn=0x00000000, connection status m_bConnected=false, host handle m_pHost=0x01d6eec8
AppEngine node
22041864  2/14/2011 11:47:02 PM  24562476  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
22041865  2/14/2011 11:47:02 PM  24562476  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.
22041866  2/14/2011 11:47:02 PM  24562476  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.
22041976  2/14/2011 11:49:41 PM  15842492  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
WIS node
1718767  2/14/2011 11:48:49 PM  24443064  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.
1718768  2/14/2011 11:48:49 PM  24443064  Warning  NmxSvc  Platform 2 exceed maximum heartbeats timeout of 8000 ms
Observations
GR node
The following warnings are observed on the GR node during power-off
of the host server.
36904952  2/15/2011 12:13:55 AM  22722288  Warning  ScriptRuntime  GR.privatembytes: Script timed out.
AppEngine node
22042373  2/15/2011 12:15:17 AM  21002104  Warning  ScriptRuntime  AppEngineNode1.privatembytes: Script timed out.
22042560  2/15/2011 12:16:10 AM  24964832  Warning  DCMConnectionMgr  Open() of DCMConnection failed: 'A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured...
22042561  2/15/2011 12:16:11 AM  24964832  Warning  SQLDataRuntime3  SQLTestResults Command Failure - A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured...
WIS node
None
Observations
During a network disconnect of the host server, all the nodes are moved
to the other host server, and all the VMs on the original host server are
restarted.
GR node
The following warnings are observed on the GR node during the network
disconnect of the host server.
37299177  2/15/2011 7:07:43 PM  18522652  Warning  ScriptRuntime  GR.privatembytes: Script timed out.
37306377  2/15/2011 7:08:26 PM  16683064  Warning  aahStoreSvc  Attempt to store value prior to disconnect time. Value discarded -
37306524  2/15/2011 7:12:38 PM  16683064  Warning  aahStoreSvc  Attempt to store value prior to disconnect time. Value discarded - possible loss of data (SGR_mdas, 4504, 2011/02/15 13:38:25.428, 2011/02/15 13:38:24.868) [SGR; pipeserver.cpp; 958; 3031]
AppEngine node
22049347  2/15/2011 7:07:15 PM  23482352  Warning  ScriptRuntime  AppEngineNode1.privatembytes: Script timed out.
22049594  2/15/2011 7:07:59 PM  29765016  Warning  DCMConnectionMgr  Open() of DCMConnection failed: 'A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured...
22049595  2/15/2011 7:08:00 PM  29764200  Warning  SQLDataRuntime3  SQLTestResults Command Failure - A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured...
22049760  2/15/2011 7:08:55 PM  27281088  Warning  InTouchProxy  Failed to reconnecnt to data source
22049838  2/15/2011 7:13:19 PM  29762980  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
WIS node
None
GR node
41312693  2/16/2011 3:08:02 PM  15202136  Warning  aahStoreSvc  Snapshot write operation took longer than 10 seconds. Change your system settings to decrease size of snapshot or stop other applications which may affect this parameter (1024, 3, 0, 52004273, 2137401, 10) [SGR; deltastore.cpp; 2669]
41312714  2/16/2011 3:10:04 PM  15202136  Warning  aahStoreSvc  Snapshot write operation took longer than 10 seconds. Change your system settings to decrease size of snapshot or stop other applications which may affect this parameter (1024, 3, 0, 52004273, 2200899, 11) [SGR; deltastore.cpp; 2669]
41312743  2/16/2011 3:10:47 PM  15202136  Warning  aahStoreSvc  Values in the past did not fit within the realtime window; discarding data (SGR_mdas, 4500, 2011/02/16 09:40:14.558, 2011/02/16 09:40:46.643) [SGR; pipeserver.cpp; 2388]
AppEngine
22054859  2/16/2011 3:06:44 PM  22364432  Warning  aaEngine  0:9E8 Values in the past did not fit within the realtime window; discarding data (2790, 2011/02/16 09:36:13.631, 2011/02/16 09:36:44.035) [aahMDASSF.cpp; 3709; 1]
22055043  2/16/2011 3:10:27 PM  22321496  Warning  ScriptRuntime  Insert.InsertValueWF: Script timed out.
WIS node
None
From the above trend, notice that there are frequent data losses for all
tag types (IO and script) generated from the GR and AppEngine nodes.
From the above trend, notice that there is data loss for the script tags of
the AppEngine node.
Note: In the event that the private network becomes disabled, you
may need to add a script to enable a failover. For more information, see
"Failover of the Virtual Machine if the Domain/Private Network is
disabled" on page 80.
Hyper-V Host
Memory    48 GB

Note: For the Hyper-V hosts to function optimally, the servers should
have the same processor, RAM, storage, and service pack level.
Preferably, the servers should be purchased in pairs to avoid hardware
discrepancies. Though differences are supported, they will impact
performance during failovers.
Virtual Machines
Using the Hyper-V host specified above, seven virtual machines can be
created in the environment with the following configuration:

Virtual Machine 1: Memory 8 GB, Storage 200 GB
Virtual Machine 2: Memory 8 GB, Storage 100 GB
Virtual Machine 3: Memory 4 GB, Storage 80 GB
Virtual Machine 4: Memory 4 GB, Storage 80 GB
Virtual Machine 5: Memory 4 GB, Storage 80 GB
Virtual Machine 6: Memory 4 GB, Storage 80 GB
Virtual Machine 7: Memory 4 GB, Storage 80 GB
Network Requirements
For this high availability architecture, you need two physical network
cards installed on the host computer and configured to separate the
domain network from the process network.
This setup requires a minimum of two host servers and one storage
server shared across the two hosts. Another independent node is used
for configuring the quorum. For more information on configuring the
quorum, refer to "Configure Cluster Quorum Settings" on page 154.
The following procedures help you install and configure a two-node
failover cluster for a medium-scale, high-availability virtualization
environment.
Note: Repeat the above procedure to include all the other nodes that
will be part of the Cluster configuration process.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
4 View the instructions on the wizard and click Next. The Select
Servers or a Cluster area appears.
Note: You can either enter the server name or click Browse to select
the relevant server name.
Note: You can add one or more server names. To remove a server
from the Selected servers list, select the server and click Remove.
6 Click the Run only tests I select option to skip the storage validation
process, and then click Next. The Test Selection screen appears.
Note: Click the Run all tests (recommended) option to validate the
default selection of tests.
7 Clear the Storage check box, and then click Next. The Summary
screen appears.
8 Click View Report to view the test results or click Finish to close
the Validate a Configuration Wizard window.
A warning message appears indicating that all tests have not been
run. This usually happens in a multisite cluster where storage tests
are skipped. You can proceed if there is no other error message. If the
report indicates any other error, you need to fix the problem and rerun
the tests before you continue. You can view the results of the tests
after you close the wizard in SystemRoot\Cluster\Reports\Validation
Report date and time.html where SystemRoot is the folder in which
the operating system is installed (for example, C:\Windows).
To learn more about cluster validation tests, click More about cluster
validation tests in the Validate a Configuration Wizard window.
Creating a Cluster
To create a cluster, you need to run the Create Cluster wizard.
To create a cluster
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
4 View the instructions and click Next. The Validation Warning area
appears.
Note: You can either enter the server name or click Browse to select
the relevant server name.
7 In the Cluster Name box, enter the name of the cluster and click
Next. The Confirmation area appears.
8 Click Next. The cluster is created and the Summary area appears.
9 Click View Report to view the cluster report created by the wizard
or click Finish to close the Create Cluster Wizard window
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
5 Check the summary pane of the networks and ensure that Cluster Use
is disabled for any network that is not required for cluster
communication.

Note: Repeat the above process for each additional network that is not
required for cluster communication.
To create and secure a file share for the node and file share
majority quorum
1 Create a new folder on the system that will host the share
directory.
2 Right-click the folder that you created and click Properties. The
Quorum Properties window for the folder you created appears.
3 Click the Sharing tab, and then click Advanced Sharing. The
Advanced Sharing window appears.
4 Select the Share this folder check box and click Permissions. The
Permissions for Quorum window appears.
6 In the Enter the object name to select box, enter the two node
names used for the cluster in the medium node configuration and
click OK. The node names are added and the Permissions for
Quorum window appears.
7 Select the Full Control, Change, and Read check boxes and click
OK. The Properties window appears.
8 Click OK. The folder is shared and can be used to create virtual
machines.
To configure a node and file share majority quorum using the
failover cluster management tool
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 Right-click the name of the cluster you created and click More
Actions. Click Configure Cluster Quorum Settings. The Configure
Cluster Quorum Wizard window appears.
3 View the instructions on the wizard and click Next. The Select
Quorum Configuration area appears.
Note: The Before you Begin screen appears the first time you run the
wizard. You can hide this screen on subsequent uses of the wizard.
Note: Click the Node Majority option if the cluster is configured for
node majority or a single quorum resource. Click the Node and Disk
Majority option if the number of nodes is even and not part of a
multisite cluster. Click the No Majority: Disk Only option if the disk
being used is only for the quorum.
Note: You can either enter the server name or click Browse to select
the relevant shared path.
6 The details you selected are displayed. To confirm the details click
Next. The Summary screen appears and the configuration details
of the quorum settings are displayed.
After you configure the cluster quorum, you must validate the cluster.
For more information, refer to
http://technet.microsoft.com/en-us/library/bb676379(EXCHG.80).aspx.
Configuring Storage
For any virtualization environment, storage is one of the central
barriers to implementing a good virtualization strategy. With Hyper-V,
however, VM storage is kept on a Windows file system, and users can put
VMs on any file system that a Hyper-V server can access. As a result,
you can build HA into the virtualization platform and into the storage for
the virtual machines. This configuration can accommodate a host failure
by making storage accessible to all Hyper-V hosts, so that any host can
run VMs from the same path on the shared folder. The back end of this
storage can be local storage, a storage area network (SAN), iSCSI, or
whatever else is available to fit the implementation.
System                                                  Processor    Storage
Application Engine (Runtime node) Virtual Machine      2            80 GB
Historian Client                                                     80 GB
Configuring Hyper-V
Microsoft Hyper-V Server 2008 R2 helps you create a virtual
environment that improves server utilization. It enhances patching,
provisioning, management, support tools, processes, and skills.
Microsoft Hyper-V Server 2008 R2 provides live migration, cluster
shared volume support, and expanded processor and memory support for
host systems.
Hyper-V is available in x64-based versions of the Windows Server 2008 R2
operating system, specifically the x64-based versions of Windows
Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and
Windows Server 2008 R2 Datacenter.
The following are the prerequisites to set up Hyper-V:
• x64-based processor
• Hardware-assisted virtualization
• Hardware Data Execution Prevention (DEP)
To configure Hyper-V
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 In the Roles Summary area, click Add Roles. The Add Roles
Wizard window appears.
Note: You can also right-click Roles, and then click Add Roles Wizard
to open the Add Roles Wizard window.
3 View the instructions on the wizard and click Next. The Select
Server Roles area appears.
4 Select the Hyper-V check box and click Next. The Create Virtual
Networks area appears.
5 Select the check box next to the required network adapter to make
the connection available to virtual machines. Click Next. The
Confirm Installation Selections area appears.
3 View the instructions in the Before You Begin area and click Next.
The Specify Name and Location area appears.
c In the Location box, enter the location where you want to store
the virtual machine.
Note: You can either enter the path to the filename or click Browse to
select the relevant location.
6 Select the network to be used for the virtual machine and click
Next. The Connect Virtual Hard Disk area appears.
7 Click the Create a virtual hard disk option and then do the
following:
a In the Name box, enter the name of the virtual hard disk.
b In the Location box, enter the location of the virtual hard disk.

Note: You can either enter the location or click Browse to select the
location of the virtual hard disk, and then click Next.

c In the Size box, enter the size of the virtual hard disk and then
click Next. The Installation Options area appears.

Note: You need to click either the Use an existing virtual hard disk
or the Attach a virtual hard disk later option only if you are using an
existing virtual hard disk or you want to attach a virtual disk later.
8 Click the Install an operating system later option and click Next.
The Completing the New Virtual Machine Window area appears.
9 Click Finish. The virtual machine is created with the details you
provided. Because this process was started from the Failover
Cluster Manager, the High Availability Wizard window appears
after the virtual machine is created.
10 Click View Report to view the report or click Finish to close the
High Availability Wizard window.
Note: You can use the above procedure to create multiple virtual
machines with appropriate names and configuration.
4 Navigate to the Dependencies tab, select the nicha script from the
Resource combo box, and click OK.
[Table: Virtual Node, IO tags (Approx.), Historized tags (Approx.).]

Topic       Update Rate (ms)    IO tags (Approx.)    Historized tags (Approx.)
Topic 0     500                 14                   5
Topic 1     1000                1                    1
Topic 39    1000                4                    4
• Engine 1 : 9
• Engine 2 : 2
• Engine 3 : 492
• Engine 4 : 312
• Engine 5 : 507
• Engine 6 : 2
• Engine 7 : 24
• Engine 8 : 24
• Engine 9 : 250
• Engine 10: 508
• Engine 11: 506
• Engine 12: 4
• Engine 13: 22
• Engine 14: 1
• Engine 15: 1
Number of DI objects: 6
[Table: Scenario / Observation summary.]

Quick Migration

[Table: Products, RTO, RPO, and data loss (tags and duration) for each Quick Migration scenario. Recorded values include:
InTouch: RTO 335 sec + time taken by the user to start InTouchView; RPO: data loss for the $Second tag (imported to Historian) of 6 min 47 sec.
InTouch: RTO 150 sec + time taken by the user to start InTouchView; RPO: data loss for the $Second tag (imported to Historian) of 4 min 14 sec.]

[Table: Scenario / Observation cross-references, including "Quick Migration of AppEngine1" on page 208 and "Quick Migration of AppEngine2" on page 211.]
Observations
There are no errors and warnings observed on all the virtual nodes.
Live Migration of GR
Trends:
In the Historian trend below, the first tag,
SineWaveCal_001.SinewaveValue, receives data from scripts on the GR
node and is historized.
This is also captured in the RPO for Live Migration of the GR node for
the IAS tag (Script).
Observations
The GR node is Platform1 in the deployed Galaxy. Therefore, during
Live Migration of the GR node, there will inevitably be an interval when
the following occurs:
• The GR node is not able to connect to the other deployed nodes
(Platforms 2, 3, 4, and 5).
• The rest of the virtual machines are not able to connect to the GR
node (Platform1).
GR node
53039620  2/17/2011 6:16:25 PM  2708516  Warning  NmxSvc  Platform 3 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
53039621  2/17/2011 6:16:25 PM  2708516  Warning  NmxSvc  Platform 4 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
53039622  2/17/2011 6:16:25 PM  2708516  Warning  NmxSvc  Platform 5 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
53039623  2/17/2011 6:16:25 PM  2708516  Warning  NmxSvc  Platform 6 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
AppEngine1 node
113466823  2/17/2011 6:16:17 PM  29883004  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
AppEngine2 node
5543763  2/17/2011 6:16:17 PM  30602176  Warning  NmxSvc  Platform 1 exceed maximum heartbeats timeout of 8000 ms.  NmxSvc
Historian
None
WIS node
None
HistClient
None
This is also captured in the RPO for Live Migration of AppEngine1 for
Industrial Application Server (IAS) IO tag (DASSiDirect).
This is also captured in the RPO for Live Migration of AppEngine1 for
IAS tag (Script).
Observations
Error log during Live Migration of AppEngine1:
AppEngine1 is Platform2 in the deployed Galaxy. Therefore, during
Live Migration, there will inevitably be an interval when the following
occurs:
• AppEngine1 is not able to connect to the other deployed nodes
(Platforms 3, 4, and 5).
• The other nodes are not able to connect to AppEngine1 (Platform2).
• Some data sent from AppEngine1 is discarded until the TimeSync
utility is executed and the system time of AppEngine1 is synchronized,
so the Historian is bound to discard data from the AppEngine1 node.
GR node
529810502/17/20114:09:12 PM2708516WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine1
1133382602/17/20114:09:19 PM29883004WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1133382612/17/20114:09:19 PM29883004WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1133382622/17/20114:09:19 PM29883004WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1133382632/17/20114:09:19 PM29883004WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1133382642/17/20114:09:19 PM29883004WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine2
55432652/17/20114:09:12 PM30602176WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
Historian
236000202/17/20114:10:03 PM1928968WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 28898, 2011/02/17 10:39:13.246,
2011/02/17 10:40:02.708) [HISTORIAN; pipeserver.cpp; 2388]
aahCfgSvc
236000452/17/20114:14:16 PM1928968WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 28898, 2011/02/17 10:39:13.246,
2011/02/17 10:40:02.708) [HISTORIAN; pipeserver.cpp; 2388;
15135]aahCfgSvc
WIS node
None
HistClient
None
This is also captured in the RPO for Live Migration of AppEngine2 for
IAS IO tag (DASSiDirect).
In the Historian Trend below, the first tag is modified using scripts
and is historized from Platform AppEngine2.
This is also captured in the RPO for Live Migration of AppEngine2 for
the IAS tag (Script).
Observations
Error log during Live Migration of AppEngine2
During Live Migration, AppEngine2 is Platform 4 in the deployed
Galaxy, so there will be an instance during Live Migration when the
following occurs:
• AppEngine2 will not be able to connect to the other deployed nodes
(Platforms 1, 2, and 3).
• The other nodes will not be able to connect to AppEngine2
(Platform 4).
• Some data sent from AppEngine2 will be discarded until the
TimeSync utility is executed and the system time of AppEngine2 is
synchronized, so the Historian is bound to discard data from the
AppEngine2 node.
GR node
529937522/17/20114:43:55 PM2708516WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine1
The below warnings are observed on the AppEngine1 node during
the Live Migration of AppEngine2.
1133740042/17/20114:43:55 PM29883004WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine2
55432662/17/20114:44:02 PM30602176WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55432672/17/20114:44:02 PM30602176WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55432682/17/20114:44:02 PM30602176WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55432692/17/20114:44:02 PM30602176WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55432702/17/20114:44:02 PM30602176WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
Historian
236003182/17/20114:44:47 PM1928968WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 20418, 2011/02/17 11:14:00.347,
2011/02/17 11:14:46.178) [HISTORIAN; pipeserver.cpp; 2388]
aahCfgSvc
236003622/17/20114:49:00 PM1928968WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 20418, 2011/02/17 11:14:00.347,
2011/02/17 11:14:46.178) [HISTORIAN; pipeserver.cpp; 2388;
1570]aahCfgSvc
WIS node
None
HistClient
None
Observations
When the Historian Node undergoes Live Migration, the following is
observed:
• GR and AppEngine nodes are in the Store Forward mode.
• The Historian Client Trend will not be able to connect to the
Historian, so a warning is expected.
• The Historian's time is not synchronized during the Live Migration
of the Historian, so the "Attempt to store values in the future"
message is expected.
After Live Migration, the Historian's time needs to be synchronized.
Therefore, the server shifting warning is expected.
AppEngine1
None
AppEngine2
None
Historian
235940332/17/20119:07:34 AM37963388WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 19594, 2011/02/17 03:34:41.543,
2011/02/17 03:37:33.054) [HISTORIAN; pipeserver.cpp; 2388]
235940522/17/20119:10:28 AM37963388WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas, Integer_001.str1,
19598, 2011/02/17 03:40:27.073, 1) [HISTORIAN;
pipeserver.cpp; ...
WIS node
None
HistClient
None
Observations
No errors or warnings are observed on the nodes.
Quick Migration of GR
Trends:
Observations
During the Quick Migration of the GR node, the GR node is Platform 1
in the deployed Galaxy, so there will be an instance during Quick
Migration when the following occurs:
• The GR node will not be able to connect to the other deployed
nodes (Platforms 2, 3, 4, and 5).
• The rest of the virtual machines will not be able to connect to the
GR node (Platform 1).
Because DAS SI Direct provides data to the DDESuiteLinkObject and
DAS SI Direct is on the GR node, the message "DAServerManager
Target node is down" is expected on AppEngine1 and AppEngine2.
Scripts involving SDK script library calls time out during the Quick
Migration of the GR node.
GR node
332550721/26/20116:30:35 PM30844484WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms. 33255074
1/26/20116:30:35 PM30844484WarningNmxSvcPlatform 2 exceed
332637691/26/20116:56:57 PM23405200WarningCRLinkServer
CQueue::BroadCastData Ping() hr:0x800706baaaGR
AppEngine1
934921901/26/20116:30:27 PM54804996WarningDAServerManagerTarget
node is down. DASCC will stop scanning for Server active
state.mmc
934922721/26/20116:30:31 PM16522908WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
934933061/26/20116:31:27 PM60966480WarningPackageManagerNet
GalaxyMonitor PollPackageServer Communication failed. A
connection attempt failed because the connected party did not
properly respond after a period of time, or established
connection failed because connected host has failed to respond
10.91.60.33:8090aaIDE
934963991/26/20116:34:11 PM39043908WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.aaEngine
935109161/26/20116:46:52 PM39043908WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
AppEngine2
54454601/26/20116:30:27 PM8963888WarningDAServerManagerTarget
node is down. DASCC will stop scanning for Server active
state.
54454651/26/20116:33:07 PM20882104WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
WIS node
9576531/26/20116:30:30 PM37641516WarningNmxSvcPlatform 1 exceed
maximum heartbeats timeout of 8000 ms.NmxSvcHAWISNODE
80.119.175.64
Observations
During the Quick Migration of the AppEngine1 node, the following is
expected:
• The AppEngine1 node is Platform 2 in the deployed Galaxy, so
there will be an instance during Quick Migration when the
following occurs:
• The AppEngine1 node will not be able to connect to the other
deployed nodes (Platforms 1, 3, 4, and 5).
• The rest of the virtual machines will not be able to connect to
AppEngine1 (Platform 2).
• AppEngine1 is not synchronized during the Quick Migration, so
the message "Values in the past did not fit within the realtime
window" is expected on AppEngine1.
• AppEngine1 is not synchronized during the Quick Migration, so
the message "Values in the past did not fit within the realtime
window" is expected on the Historian node.
GR node
None
AppEngine1
1098432442/14/20116:45:15 PM21684544WarningaaEngine0:A8C Values
in the past did not fit within the realtime window; discarding
data (19713, 2011/02/14 13:09:48.109, 2011/02/14
13:11:02.363) [aahMDASSF.cpp; 3709; 15]aaEngine
AppEngine2
None
Historian
None
HistClient
None
WIS
None
Observations
During the Quick Migration of the AppEngine2 node, the following is
expected:
• The AppEngine2 node is Platform 4 in the deployed Galaxy, so
there will be an instance during Quick Migration when the
following occurs:
• The AppEngine2 node will not be able to connect to the other
deployed nodes (Platforms 1, 2, 3, and 5).
• The rest of the virtual machines will not be able to connect to
AppEngine2 (Platform 4).
• AppEngine2 is not synchronized during the Quick Migration, so
the message "Values in the past did not fit within the realtime
window" is expected on AppEngine2.
• AppEngine2 is not synchronized during the Quick Migration, so
the message "Values in the past did not fit within the realtime
window" is expected on the Historian node.
GR node
None
AppEngine1
None
AppEngine2
55327542/14/20117:19:02 PM27682308WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
55327552/14/20117:19:02 PM27682308WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
55327562/14/20117:19:02 PM27682308WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
55327572/14/20117:19:02 PM27682308WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.
Historian:
None
HistClient
None
WIS
None
$Second (InTouch)
SysTimeSec (Historian)
GR node
511042092/15/20113:56:25 PM38043808WarningScriptRuntime
AsynchronousBuffer.script6: Script timed out.aaEngine
511042682/15/20113:56:35 PM38043808WarningScriptRuntime
AsynchronousBuffer.script2: Script timed out.aaEngine
511044362/15/20113:57:03 PM38043808WarningScriptRuntime
AsynchronousBuffer.script3: Script timed out.aaEngine
AppEngine1
1112477922/15/20113:57:53 PM29083924WarningScriptRuntime
InsertAsync.InsertValueWF: Script timed out.aaEngine
1112477932/15/20113:57:53 PM29083932WarningScriptRuntime
InsertAsync.InsertValuePerodic: Script timed out.aaEngine
1112478062/15/20113:57:53 PM29082720WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
AppEngine2
55351222/15/20113:56:31 PM24722468WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
55351232/15/20113:57:31 PM24722468WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
Historian
235771112/15/20113:57:49 PM23681100WarningaahCfgSvc"Server time
is shifting (Expected time, Current time)"
(02/15/11,15:57:48,174, 02/15/11,15:54:50,984) [HISTORIAN;
Config.cpp; 2040]aahCfgSvc
Hist Client
6476362/15/20113:55:49 PM20963696WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
AppEngine1
AppEngine2
GR node
Historian
$Second (InTouch)
SysTimeSec (Historian)
Observations
Error log after failover
GR node
515463092/16/20119:58:36 AM28161980Warning
EnginePrimitiveRuntimeEng:: m_GDC->GetFile failed. Error 2
(0x00000002): The system cannot find the file specified
aaEngine
515463832/16/20119:58:51 AM28161980WarningScriptRuntime
GR.privatembytes: Script timed out.aaEngine
AppEngine1
1121667922/16/20119:58:17 AM28602864WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
1121671472/16/20119:58:59 AM2740388WarningDCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
1121671482/16/20119:58:59 AM2740388WarningSQLDataRuntime3
SQLTestResults Command Failure - A network-related or
1121676622/16/20119:59:59 AM28602864WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
AppEngine2
55385612/16/20119:57:50 AM31283132WarningScriptRuntime
HourVal_003.GenerateHourValue: Script timed out.aaEngine
55385752/16/20119:57:56 AM24521672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
55385952/16/20119:58:56 AM24521672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
55385962/16/20119:59:56 AM24521672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
55385982/16/201110:00:56 AM24521672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
55386042/16/201110:01:56 AM24521672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
Historian
None
HistClient
None
WIS
None
AppEngine1
AppEngine2
GR
Historian
$Second (InTouch)
SysTimeSec
Observations
Error log after failover
GR node
516659502/16/20113:05:39 PM30563060Warning
EnginePrimitiveRuntimeEng:: m_GDC->GetFile failed. Error 2
(0x00000002): The system cannot find the file specified
aaEngine
516660272/16/20113:06:03 PM30563060WarningScriptRuntime
GR.privatembytes: Script timed out.aaEngine
AppEngine1
1123841902/16/20112:55:38 PM29923008WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1123841912/16/20112:55:38 PM29923008WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1123843442/16/20113:05:30 PM28922896WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
1123846742/16/20113:06:07 PM20161660WarningDCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
1123846752/16/20113:06:07 PM20161660WarningSQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
1123987662/16/20113:20:23 PM620436WarningInTouchProxyFailed to
reconnecnt to data source aaEngine
1123987832/16/20113:20:24 PM620436WarningInTouchProxyFailed to
reconnecnt to data source aaEngine
AppEngine2
55394192/16/20112:55:39 PM26801844WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55394202/16/20112:55:39 PM26801844WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55395812/16/20113:03:41 PM31363140WarningScriptRuntime
P10250.SetValues: Script timed out.aaEngine
55395822/16/20113:03:41 PM30843088WarningScriptRuntime
P20250.SetValues: Script timed out.aaEngine
55396192/16/20113:03:56 PM32123216WarningScriptRuntime
HourVal_003.GenerateHourValue: Script timed out.aaEngine
55396552/16/20113:04:09 PM23362332WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
Historian
235893212/16/20113:08:05 PM24164052WarningaahCfgSvc"Server time
is shifting (Expected time, Current time)"
AppEngine1
AppEngine2
GR
Observations
Error log after failover
GR Node
None
AppEngine1
1125514032/16/20116:17:30 PM29762996WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine2
55402932/16/20116:17:31 PM26962720WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
Historian
235917472/16/20116:33:36 PM21444220WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (historian_2, $Second, 38883,
2011/02/16 13:03:34.645, 2) [HISTORIAN; pipeserver.cpp; 1831]
aahCfgSvc
WIS
59751322/16/20116:17:35 PM37842276WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine1
1140593102/18/20115:31:25 PM36963700WarningScriptRuntime
Insert.InsertValueOT: Script timed out.aaEngine
1140644952/18/20115:37:24 PM36963700WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
1140662112/18/20115:39:26 PM36963700WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine
1140662282/18/20115:39:30 PM36963700WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.aaEngine
AppEngine2
None
GR node
None
Historian node
236067632/18/20115:10:40 PM33842780WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (1024, 3, 0,
203203432, 5893027, 15) [HISTORIAN; deltastore.cpp; 2669]
aahCfgSvc
236067642/18/20115:10:52 PM33842780WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (1024, 3, 0,
209096459, 4186312, 11) [HISTORIAN; deltastore.cpp; 2669]
aahCfgSvc
236067672/18/20115:11:24 PM33842780WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
WIS node
59756382/18/20115:11:23 PM36723676WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
From the above trend, it is observed that there is also data loss for IAS
IO tags and script tags on the GR node and the AppEngine nodes.
Historian node
236067282/18/20115:07:12 PM33842780WarningaahStoreSvcAttempt
to store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas, DifferentQuality.goodbool,
19605, 2011/02/18 11:37:11.126, 1) [HISTORIAN; pipeser...aahCfgSvc
From the above trend, it is observed that there is data loss for IAS IO
tags and script tags on the GR node and the AppEngine nodes.
Chapter 3
• On the host servers, disable all the network cards that are not used
by the System Platform environment. This avoids confusion during
network selection while setting up the cluster.
• As per the topology described above for the Disaster Recovery
environment, only one network is used for all communications. If
multiple networks are being used, make sure that only the primary
network, which is used for communication between the Hyper-V
nodes, is enabled for Failover Cluster communication. Disable the
remaining cluster networks in Failover Cluster Manager.
• Ensure that the virtual networks created in Hyper-V Manager have
the same name across all the nodes that participate in the cluster.
Otherwise, migration/failover of Hyper-V virtual machines will fail.
A quick consistency check is sketched after this list.
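As noted in the last point, one simple way to verify name consistency is to compare the virtual network (virtual switch) names collected from each host, for example from Hyper-V Manager or an inventory export. The host and switch names below are hypothetical:

```python
# Virtual switch names per Hyper-V host, as collected beforehand
# (for example from Hyper-V Manager); host and switch names are hypothetical.
virtual_networks = {
    "HostA": {"Domain Network", "Plant Network"},
    "HostB": {"Domain Network", "Plant  Network"},  # note the stray extra space
}

# Every host must expose exactly the same set of names; otherwise
# migration/failover of the virtual machines will fail.
reference_host, reference_names = next(iter(virtual_networks.items()))
for host, names in virtual_networks.items():
    missing = reference_names - names
    extra = names - reference_names
    if missing or extra:
        print(f"{host}: missing {sorted(missing)}, unexpected {sorted(extra)} "
              f"(compared to {reference_host})")
```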
The following table lists the impact on CPU utilization and Bandwidth
with various Compression Levels.
• Medium Configuration Load: Approx. 50000 IO Points with
Approx. 20000 attributes being historized
• Network: Bandwidth controller with bandwidth: 45Mbps and
No Latency
These readings are taken while mirroring is continuously happening
between the source and destination storage SANs and all the VMs are
running on the source host server. The data captured shows that the
% CPU utilization of the SteelEye mirroring process increases with
increasing compression levels. Based on these findings, Compression
Level 2 is recommended in the medium scale virtualization
environment.
% Processor Time (ExtMirrSvc) - SteelEye Mirroring process
% Processor Time (CPU) - Overall CPU
Total Bytes / Sec
Historian
• In case of Live and Quick Migration of the Historian, you may
notice that the Historian logs values with quality detail 448 and
that some values are logged twice with the same timestamps. This
is because the suspended Historian VM starts on the other cluster
node with the system time it had when it was suspended before the
migration. As a result, some of the data points it receives with the
current time appear to be in the future to the Historian. The
Historian therefore modifies the timestamps to its system time and
updates the quality detail to 448. This continues until the system
time of the Historian node catches up with the real current time
through the TimeSync utility, after which the problem goes away.
It is therefore recommended to stop the Historian before the
migration and restart it after the VM is migrated and its system
time is synchronized with the current time. This behavior is
illustrated in the sketch after this list.
• Live and Quick Migration of the Historian should not be done while
a block changeover is in progress on the Historian node.
• If a failover happens (for example, due to a network disconnect
on the source Host Virtualization Server) while the Historian
status is still “Starting”, the Historian node fails over to the
target Host Virtualization Server. In the target host, Historian
fails to start. To recover from this state, kill the Historian
services that failed to start and then start the Historian by
launching the SMC.
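As referenced in the first recommendation above, the timestamp handling can be pictured as a check of each incoming value against the Historian's own clock. This is only an illustrative sketch: the quality detail value 448 and the warning texts are taken from the logs in this document, while the window size and the code structure are assumptions, not the Historian's actual implementation.

```python
from datetime import datetime, timedelta

REALTIME_WINDOW = timedelta(seconds=30)   # illustrative window size only
QD_TIME_OVERWRITTEN = 448                 # quality detail seen in the logs

def store_value(tag, value, timestamp, server_time):
    """Sketch of how a historian might treat timestamps relative to its clock."""
    if timestamp > server_time:
        # Value appears to come from the future (the server clock is behind
        # after the migrated VM resumed): overwrite the timestamp and flag it.
        print(f"Attempt to store values in the future; overwriting timestamp for {tag}")
        return (tag, value, server_time, QD_TIME_OVERWRITTEN)
    if server_time - timestamp > REALTIME_WINDOW:
        # Value is too old to fit in the realtime window: discard it.
        print(f"Values in the past did not fit within the realtime window; discarding {tag}")
        return None
    return (tag, value, timestamp, 0)

now = datetime(2011, 2, 17, 10, 40, 0)
store_value("SysTimeSec", 12, now + timedelta(minutes=3), now)   # future -> overwritten
store_value("SysTimeSec", 55, now - timedelta(minutes=2), now)   # too old -> discarded
```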
InTouch
• Ensure that InTouch WindowViewer is added to the Startup
programs so that the view starts automatically when the virtual
machine reboots.
Application Server
• If a failover happens (for example, due to a network disconnect
on the source Host Virtualization Server) while the Galaxy
migration is in progress, the GR node fails over to the target
Host Virtualization Server. In the target host, on opening the
IDE for the Galaxy, the templates do not appear in the Template
Toolbox or the Graphic Toolbox. To recover from this state,
delete the Galaxy, create a new Galaxy, and initiate the
migration process once again.
• If a failover happens (for example, due to an abrupt power-off
on the source Host Virtualization Server) while a platform
deploy is in progress, the Platform node fails over to the target
Host Virtualization Server. In the target host, some objects will
be in deployed state and the rest will be in undeployed state. To
recover from this state, redeploy the whole Platform once
again.
• If a failover happens (for example, due to an abrupt power-off
on the source Host Virtualization Server) while a platform
undeploy is in progress, the Platform node fails over to the
target Host Virtualization Server. In the target host, some
objects will be in undeployed state and the rest will be in
deployed state. To recover from this state, undeploy the whole
Platform once again.
Hyper-V Hosts
Memory 12GB
Note: For the Hyper-V hosts to function optimally, the servers should
have the same processor, RAM, storage, and service pack level.
Preferably, the servers should be purchased in pairs to avoid hardware
discrepancies. Though differences are supported, they will impact
performance during failovers.
Virtual Machines
Using the Hyper-V host specified above, three virtual machines can be
created with the configuration given below.
Memory 4 GB
Storage 80 GB
Memory 2 GB
Storage 40 GB
Memory 4 GB
Storage 40 GB
Network Requirements
For this architecture, you can use one physical network card installed
on the host computer for both the domain network and the process
network.
Note: Repeat this procedure on all the other nodes that will be part of
the cluster configuration.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
5 Click the Run only the tests I select option to skip the storage
validation process, and click Next. The Test Selection screen
appears.
Note: Click the Run all tests (recommended) option to validate the
default selection of tests.
6 Clear the Storage check box, and then click Next. The Summary
screen appears.
7 Click View Report to view the test results or click Finish to close
the Validate a Configuration Wizard window.
A warning message appears indicating that all the tests have not been
run. This usually happens in a multisite cluster where the storage
tests are skipped. You can proceed if there is no other error message. If
the report indicates any other error, you need to fix the problem and
re-run the tests before you continue. You can view the results of the
tests after you close the wizard in
SystemRoot\Cluster\Reports\Validation Report date and time.html
where SystemRoot is the folder in which the operating system is
installed (for example, C:\Windows).
To know more about cluster validation tests, click More about cluster
validation tests on Validate a Configuration Wizard window.
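If you prefer to locate the most recent report without browsing, a minimal sketch that finds the newest report in the folder named above (assuming the default report location) follows:

```python
import os
from pathlib import Path

# %SystemRoot%\Cluster\Reports holds the "Validation Report <date and time>.html" files.
reports_dir = Path(os.environ.get("SystemRoot", r"C:\Windows")) / "Cluster" / "Reports"

if reports_dir.exists():
    reports = sorted(reports_dir.glob("*.html"), key=lambda p: p.stat().st_mtime)
    if reports:
        print("Most recent cluster validation report:", reports[-1])
    else:
        print("No validation reports found in", reports_dir)
else:
    print("Report folder not found:", reports_dir)
```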
Creating a Cluster
To create a cluster, you need to run the Create Cluster wizard.
To create a cluster
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
4 View the instructions and click Next. The Validation Warning area
appears.
Note: You can either type the server name or click Browse to select
the relevant server name.
7 In the Cluster Name box, type the name of the cluster and click
Next. The Confirmation area appears.
8 Click Next. The cluster is created and the Summary area appears.
9 Click View Report to view the cluster report created by the wizard
or click Finish to close the Create Cluster Wizard window.
The file share to be used for the node and File Share Majority quorum
must be created and secured before configuring the failover cluster
quorum. If the file share has not been created or correctly secured, the
following procedure to configure a cluster quorum will fail. The file
share can be hosted on any computer running a Windows operating
system.
To configure the cluster quorum, you need to perform the following
procedures:
• Create and secure a file share for the node and file share majority
quorum
• Use the failover cluster management tool to configure a node and
file share majority quorum
To create and secure a file share for the node and file share
majority quorum
1 Create a new folder on the system that will host the share
directory.
2 Right-click the folder that you created and click Properties. The
Quorum Properties window for the folder you created appears.
3 Click the Sharing tab, and then click Advanced Sharing. The
Advanced Sharing window appears.
4 Select the Share this folder check box and click Permissions. The
Permissions for Quorum window appears.
6 In the Enter the object name to select box, enter the two node
names used for the cluster in the small node configuration and
click OK. The node names are added and the Permissions for
Quorum window appears.
7 Select the Full Control, Change, and Read check boxes and click
OK. The Properties window appears.
8 Click OK. The folder is shared and can be used to create virtual
machines.
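For reference, steps 1 through 8 above can also be scripted using the standard net share and icacls commands. The folder path, share name, and node accounts in the sketch below are hypothetical placeholders and must be replaced with your own values:

```python
import os
import subprocess

share_folder = r"C:\QuorumShare"          # hypothetical folder for the quorum share
share_name = "QuorumShare"                # hypothetical share name
node_accounts = ["MYDOMAIN\\Node1$", "MYDOMAIN\\Node2$"]  # hypothetical node accounts

# Create the folder, share it, and grant the node accounts full control on
# both the share and the underlying NTFS folder (mirrors steps 1 to 8).
os.makedirs(share_folder, exist_ok=True)
grants = [f"/GRANT:{account},FULL" for account in node_accounts]
subprocess.run(["net", "share", f"{share_name}={share_folder}", *grants], check=True)
for account in node_accounts:
    subprocess.run(["icacls", share_folder, "/grant", f"{account}:(OI)(CI)F"], check=True)
```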
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 Right-click the name of the cluster you created and click More
Actions. Click Configure Cluster Quorum Settings. The Configure
Cluster Quorum Wizard window appears.
3 View the instructions on the wizard and click Next. The Select
Quorum Configuration area appears.
Note: The Before you Begin screen appears the first time you run the
wizard. You can hide this screen on subsequent uses of the wizard.
Note: Click the Node Majority option if the cluster is configured for
node majority or a single quorum resource. Click the Node and Disk
Majority option if the number of nodes is even and not part of a
multisite cluster. Click the No Majority: Disk Only option if the disk is
being used only for the quorum.
Note: You can either enter the share name or click Browse to select
the relevant shared path.
6 The details you selected are displayed. To confirm the details, click
Next. The Summary screen appears and the configuration details
of the quorum settings are displayed.
After you configure the cluster quorum, you must validate the cluster.
For more information, refer to
http://technet.microsoft.com/en-us/library/bb676379(EXCHG.80).aspx.
Configuring Storage
For a smaller virtualization environment, storage is one of the central
considerations in implementing a good virtualization strategy. But
with Hyper-V, VM storage is kept on a Windows file system. You can
put VMs on any file system that a Hyper-V server can access. As a
result, HA can be built into the virtualization platform and storage for
the virtual machines. This configuration can accommodate a host
failure by making storage accessible to all Hyper-V hosts so that any
host can run VMs from the same path on the shared folder. The
back-end for this storage can be local storage, a storage area network
(SAN), iSCSI, or whatever is available to fit the implementation.
For this architecture, local partitions are used.
The following table lists the minimum storage recommendations to
configure storage for each VM:
Configuring Hyper-V
Microsoft Hyper-V Server 2008 R2 helps in creating a virtual
environment that improves server utilization. It enhances patching,
provisioning, management, support tools, processes, and skills.
Microsoft Hyper-V Server 2008 R2 provides live migration, cluster
shared volume support, and expanded processor and memory support
for host systems.
Hyper-V is available in x64-based versions of the Windows Server
2008 R2 operating system, specifically the x64-based versions of
Windows Server 2008 R2 Standard, Windows Server 2008 R2
Enterprise, and Windows Server 2008 R2 Datacenter.
To configure Hyper-V
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
2 In the Roles Summary area, click Add Roles. The Add Roles
Wizard window appears.
Note: You can also right-click Roles and then click Add Roles Wizard
to open the Add Roles Wizard window.
3 View the instructions on the wizard and then click Next. The
Select Server Roles area appears.
4 Select the Hyper-V check box and click Next. The Create Virtual
Networks area appears.
5 Select the check box next to the required network adapter to make
the connection available to virtual machines. Click Next. The
Confirmation Installation Selections area appears.
3 Type the relevant job name and description in the Job name and
Job description boxes, and then click Create Job. The New Mirror
window appears.
After creating all the mirroring jobs, open the SteelEye DataKeeper
UI: on the All Programs menu, click SteelEye DataKeeper MMC. The
DataKeeper window appears.
You can navigate to Job Overview under Reports to view all the Jobs
in one place.
You can navigate to Server Overview under Reports to view all the
servers involved in job replication in one place.
3 View the instructions in the Before You Begin area and click Next.
The Specify Name and Location area appears.
Note: You can either type the location or click Browse to select the
location where you want to store the virtual machine.
6 Select the network to be used for the virtual machine and click
Next. The Connect Virtual Hard Disk area appears.
7 Click the Create a virtual hard disk option button, and then do
the following:
a In the Name box, type the name of the virtual hard disk.
b In the Location box, enter the location of the virtual hard disk.
Note: You can either type the location or click Browse to select the
location of the virtual hard disk.
c In the Size box, type the size of the virtual hard disk, and then
click Next. The Installation Options area appears.
Note: You need to click either the Use an existing virtual hard disk
or Attach a virtual hard disk later option, only if you are using an
existing virtual hard disk or you want to attach a virtual disk later.
8 Click the Install an operating system later option and click Next.
The Completing the New Virtual Machine Window area appears.
9 Click Finish. The virtual machine is created with the details you
provided. Because this process was started from the Failover Cluster
Manager, the High Availability Wizard window appears after the
virtual machine is created.
10 Click View Report to view the report or click Finish to close the
High Availability Wizard window.
4 Select the volume that you had entered while creating a SteelEye
mirroring job and click OK. The Selection Confirmation window
appears.
5 Click OK to validate the details that you have entered. The Server
Manager window appears.
Note: To modify the selection, click Cancel and modify the detail as
required in the New DataKeeper Volume Properties window, and then
click Apply.
7 Click the Dependencies tab, then from the Resource list, select
the name of the DataKeeper Volume resource that you created and
click OK.
Note: You can use the above procedure to create multiple virtual
machines with appropriate names and configuration.
GR 10000 2500
Topic 1 10000 1 1
Topic 7 600000 40 16
Topic 39 1000 4 4
• Engine 1 : 9
• Engine 2 : 13
• Engine 3 : 13
• Engine 4 : 225
• Engine 5 : 118
• Engine 6 : 118
• Engine 7 : 195
• Engine 8 : 225
• Engine 9 : 102
• Engine 10: 2
• Engine 11: 3
• Engine 12: 21
• Engine 13: 1
• Engine 14: 1
The total number of DI objects is 6.
Scenario: Live Migration
Primary Node | Products | RTO | RPO | Data Loss (Tags, Duration)

Scenario: Quick Migration
Primary node | Products | RTO | RPO | Data Loss (Tags, Duration)
N/A N/A
N/A N/A
N/A N/A
N/A N/A
N/A N/A
Scenario Observation
The IAS tag (script) does not have any data loss as the data is stored in
the SF folder in the AppEngine1 node. This data is later forwarded
after Live Migration.
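The store-and-forward behavior described above can be pictured as a local buffer that is flushed once the Historian connection returns. The following is a conceptual sketch only; the function names and queue are illustrative and do not represent the actual SF implementation:

```python
from collections import deque
from datetime import datetime

sf_queue = deque()   # stands in for the SF folder on the AppEngine1 node

def send_to_historian(tag, value, timestamp, connected):
    """Pretend sender: returns True when the Historian accepted the value."""
    return connected

def store(tag, value, timestamp, connected):
    """Buffer values locally while the Historian is unreachable."""
    if not send_to_historian(tag, value, timestamp, connected):
        sf_queue.append((tag, value, timestamp))

def forward_stored_data(connected=True):
    """Once the connection is back, forward the buffered values with their
    original timestamps, so the data is delayed rather than lost."""
    while sf_queue and send_to_historian(*sf_queue[0], connected=connected):
        sf_queue.popleft()

store("SineWaveCal_001.SinewaveValue", 0.42,
      datetime(2011, 2, 17, 10, 39, 12), connected=False)
forward_stored_data()
print("Values still waiting in SF:", len(sf_queue))   # 0 after forwarding
```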
Observations
GR is Platform 1 in the deployed Galaxy. During the Live Migration of
GR, there will be an instance during Live Migration when the
following occurs:
• The GR, Historian node will not be able to connect to the other
deployed nodes (Platforms 2, 3, and 4).
• The other nodes will not be able to connect to GR (Platform 1).
• Some data sent from GR will be discarded until the TimeSync
utility is executed and the system time of GR is synchronized.
• AppEngine node is in the Store Forward mode.
• Historian Client trend will not be able to connect to Historian, so
the warning message is expected.
• Historian machine’s time is not synchronized during the Live
Migration of Historian, so the “Attempt to store values in the
future” message is expected.
• After Live Migration, Historian’s time needs to be synchronized.
Therefore the server shifting warning message is expected.
After the Live Migration of the GR, Historian node, the stored data is
forwarded from the AppEngine node. As a result, you see the following
warnings on each of the VM nodes.
GR node
361375951/26/201112:38:44 AM31243148WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
361375961/26/201112:38:44 AM31243148WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
361375971/26/201112:38:44 AM31243148WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
AppEngine node
216391171/26/201112:38:33 AM25562572WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
216392501/26/201112:39:15 AM24522072WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
WIS node
17129361/26/201112:38:33 AM23923052WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
17129451/26/201112:38:47 AM27526332WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
17129561/26/201112:38:47 AM27523136WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
Observations
AppEngine is Platform 2 in the deployed Galaxy. During the Live
Migration of AppEngine, there will be an instance during Live
Migration when the following occurs:
• AppEngine will not be able to connect to the other deployed nodes
(Platforms 1, 3, and 4).
• The other nodes will not be able to connect to AppEngine
(Platform 2).
• Some data sent from AppEngine will be discarded until the
TimeSync utility is executed and the system time of AppEngine is
synchronized, so the Historian is bound to discard data from the
AppEngine node.
As a result, you see the following warnings on each of the VM nodes.
GR node
364408421/27/20117:52:59 PM30083108WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
AppEngine
216460131/27/20117:52:51 PM28722876WarningScriptRuntime
Insert.InsertValueWT: Script timed out.
216460141/27/20117:53:11 PM24442460WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
WIS node
17134351/27/20117:52:59 PM40642196WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
Observations
The WIS node is Platform 4 in the deployed Galaxy. During the Live
Migration of the WIS node, there will be an instance during Live
Migration when the following occurs:
• WIS will not be able to connect to the other deployed nodes
(Platforms 1, 2, and 3).
• The other nodes will not be able to connect to WIS (Platform 4).
• The Historian Client trend will not be able to connect to the
Historian, so a warning is expected.
• The Historian machine's time is not synchronized during the Live
Migration of the WIS node, so the "Attempt to store values in the
future" message is expected.
As a result, you see the following warnings on each of the VM nodes.
GR node
364213141/27/20116:04:19 PM30083068WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
364213151/27/20116:04:21 PM15562488WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (SGR_2, _PP$Time, 9092, 2011/01/27
12:34:20.343, 2) [SGR; pipeserver.cpp; 1831]
AppEngine node
216441281/27/20116:04:15 PM25482564WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
WIS node
17133261/27/20115:55:28 PM17843908Warning
ArchestrA.Visualization.WCFServiceLMX WCF service: Exception
while PushBackConfigurations(sync mode): [The operation has
timed out.] at [Server stack trace:
at
System.ServiceModel.Channels.InputQueue`1.Dequeue(TimeSpan
timeout)
at System.ServiceModel.Channels.ServicePollingDuple...
17133301/27/20116:00:16 PM51445748WarningaaTrendFailed to
acquire any type of license feature 'ActiveFactory_Pro' of
version 10.0
17133311/27/20116:00:21 PM51445748WarningaaTrendFailed to
acquire any type of license feature 'ActiveFactory_Pro' of
version 10.0
17133341/27/20116:04:22 PM40642196WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
Trends:
GR node hosts the DDESuiteLinkObjects. Therefore, during the Quick
Migration of GR node, there is data loss for tags that receive data from
DAS Server/PLC.
In the following Historian Trend, the GR node tags are:
Tag1: SystemTimeSec tag of Historian.
These are captured in the RPO table for Quick Migration of the GR
node.
Observations
GR is Platform 1 in the deployed Galaxy. During the Quick Migration
of GR, there will be an instance during Quick Migration when the
following occurs:
• The GR, Historian node will not be able to connect to the other
deployed nodes (Platforms 2, 3, and 4).
• The other nodes will not be able to connect to GR (Platform 1).
• Some data sent from GR will be discarded until the TimeSync
utility is executed and the system time of GR is synchronized.
• The AppEngine node is in the Store Forward mode.
• The Historian Client trend will not be able to connect to the
Historian, so a warning message is expected.
• The Historian machine's time is not synchronized during the Quick
Migration of the Historian, so the "Attempt to store values in the
future" message is expected.
• After the Quick Migration, the Historian's time needs to be
synchronized. Therefore, the server shifting warning message is
expected.
After the Quick Migration of the GR, Historian node, the stored data
is forwarded from the AppEngine node.
As a result, you see the following warnings on each of the VM nodes.
GR node
364855861/28/20113:30:43 PM34003416WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
364855871/28/20113:30:43 PM34003416WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
364855881/28/20113:30:43 PM34003416WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
364856211/28/20113:30:46 PM15442424WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (SGR_mdas, 9119, 2011/01/28 09:58:04.310, 2011/01/28
10:00:43.832) [SGR; pipeserver.cpp; 2388]aahCfgSvc
364857081/28/20113:35:00 PM15442424WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (SGR_mdas, 9119, 2011/01/28 09:58:04.310, 2011/01/28
10:00:43.832) [SGR; pipeserver.cpp; 2388; 2]aahCfgSvc
AppEngine node
216478441/28/20113:28:13 PM24442460WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
216479431/28/20113:29:28 PM28722876WarningScriptRuntime
Insert.InsertValueWT: Script timed out.
216479441/28/20113:30:23 PM28722876WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
WIS node
17136281/28/20113:28:12 PM40642196WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
17136351/28/20113:29:06 PM58565984WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
Observations
AppEngine is Platform 2 in the deployed Galaxy. During the Quick
Migration of AppEngine, there will be an instance during Quick
Migration when the following occurs:
• AppEngine will not be able to connect to the other deployed nodes
(Platforms 1, 3, and 4).
• The other nodes will not be able to connect to AppEngine
(Platform 2).
• Some data sent from AppEngine will be discarded until the
TimeSync utility is executed and the system time of AppEngine is
synchronized, so the Historian is bound to discard data from the
AppEngine node.
• AppEngine is not synchronized during the Quick Migration, so the
message "Values in the past did not fit within the realtime
window" is expected on AppEngine.
As a result, you see the following warnings on each of the VM nodes.
GR node
364959491/28/20117:25:23 PM34363452WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine
216497051/28/20117:26:25 PM30563052WarningScanGroupRuntime2
Can't convert Var data type to MxDataTypeaaEngine
216498351/28/20117:27:00 PM320296WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine
216499451/28/20117:31:06 PM320296WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine
WIS node
17137061/28/20117:25:22 PM40642196WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc.
Observations
The WIS node is Platform 4 in the deployed Galaxy. During the Quick
Migration of the WIS node, there will be an instance during Quick
Migration when the following occurs:
• WIS will not be able to connect to the other deployed nodes
(Platforms 1, 2, and 3).
• The other nodes will not be able to connect to WIS (Platform 4).
• The Historian Client trend will not be able to connect to the
Historian, so a warning is expected.
• The Historian machine's time is not synchronized during the Quick
Migration of the WIS node, so the "Attempt to store values in the
future" message is expected.
As a result, you see the following warnings on each of the VM nodes.
GR node
364954901/28/20115:34:24 PM34363452WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
AppEngine node
216495751/28/20115:34:25 PM25882604WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
216495761/28/20115:35:10 PM320296WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
216495771/28/20115:35:16 PM320296WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
WIS node
17136831/28/20115:35:54 PM58563024WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
The RPO values are captured in the RPO table for Quick Migration of
all nodes.
Observations
The following warnings are observed.
GR node
367411892/3/20111:22:07 PM2128668WarningNmxSvcPlatform 2 exceed
maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine
217672662/3/20111:18:18 PM19922372WarningScriptRuntime
Insert.InsertValueDataChange: Script timed out.aaEngine
217672672/3/20111:20:58 PM25642580WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
217672682/3/20111:20:58 PM25642580WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
217672732/3/20111:20:59 PM5961044WarningScanGroupRuntime2Can't
convert Var data type to MxDataTypeaaEngine
217672742/3/20111:20:59 PM5961044WarningScanGroupRuntime2Can't
convert Var data type to MxDataTypeaaEngine
217674042/3/20111:22:10 PM19922372WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.aaEngine
217674062/3/20111:22:15 PM19922372WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
WIS node
17158432/3/20111:18:28 PM38524104WarningNmxSvcPlatform 1 exceed
maximum heartbeats timeout of 8000 ms.NmxSvc
17158532/3/20111:21:20 PM60483068WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
Observations
GR node
365102481/29/20114:52:37 PM15523068WarningaahCfgSvc"Server time
is shifting (Expected time, Current time)"
365103161/29/20114:52:47 PM29082912WarningScriptRuntime
GR.privatembytes: Script timed out.
AppEngine node
216506701/29/20114:52:13 PM24762480WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
216509601/29/20114:52:52 PM3924936WarningDCMConnectionMgrOpen()
of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
WIS node
17138461/29/20114:56:32 PM23163916WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
17139371/29/20115:11:53 PM23163916WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
17141071/29/20115:27:42 PM57085456Warning
ArchestrA.Visualization.WCFServiceLMX WCF service: Exception
while PushBackConfigurations(sync mode): [The operation has
timed out.] at [
at
System.ServiceModel.Channels.InputQueue`1.Dequeue(TimeSpan
timeout)
at System.ServiceModel.Channels.ServicePollingDuple...w3wp
Observations
During the network disconnect of the host server, all the nodes are
moved to the other host server, and all the VMs on the host server are
restarted.
GR node
365197041/29/20116:35:37 PM16282572WarningaahCfgSvc"Server time
is shifting (Expected time, Current time)"
(01/29/11,18:35:36,507, 01/29/11,18:35:40,334) [SGR;
Config.cpp; 2040]aahCfgSvc
365197161/29/20116:35:43 PM28442848WarningScriptRuntime
GR.privatembytes: Script timed out.aaEngine
365269551/29/20116:36:36 PM16282572WarningaahStoreSvcAttempt to
store value prior to disconnect time. Value discarded -
possible loss of data (SGR_mdas, 2916, 2011/01/29
13:06:36.090, 2011/01/29 13:06:35.519) [SGR; pipeserver.cpp;
958]aahCfgSvc
365271071/29/20116:40:53 PM16282572WarningaahStoreSvcAttempt to
store value prior to disconnect time. Value discarded -
possible loss of data (SGR_mdas, 2916, 2011/01/29
13:06:36.090, 2011/01/29 13:06:35.519) [SGR; pipeserver.cpp;
958; 56]aahCfgSvc
AppEngine node
216513881/29/20116:35:05 PM25002504WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
216516741/29/20116:35:41 PM22843016WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine
216516861/29/20116:35:45 PM22844824WarningDCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
216516891/29/20116:35:46 PM22844824WarningSQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
216518961/29/20116:37:28 PM13162080WarningInTouchProxyFailed to
reconnect to data source aaEngine
WIS node
None
GR node
367202572/2/20117:32:56 PM15122572WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (2, 1, 0, 320749,
10166, 15) [SGR; deltastore.cpp; 2669]aahCfgSvc
367203072/2/20117:33:49 PM15122572WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (SGR_mdas, 9119, 2011/02/02 14:02:54.731, 2011/02/02
14:03:49.272) [SGR; pipeserver.cpp; 2388]aahCfgSvc
367203222/2/20117:34:06 PM35963184WarningScriptRuntime
AsynchronousBuffer.script2: Script timed out.aaEngine
367203592/2/20117:35:32 PM35963184WarningScriptRuntime
AsynchronousBuffer.script6: Script timed out.aaEngine
367203602/2/20117:35:33 PM29362940WarningaaEngineUnexpected
packet from ND Area_LateBool <P2 E4>, structId 938336640,
request 0, start 14583, end 14583, count 5, set 14578, sub
14578, state 1, commAlarm 0, subStatus <success -1 category 0
detectedBy 0 detail 0>, subQual 192, setCnt 313, setResultCnt
313,...aaEngine
367203772/2/20117:35:33 PM29362940WarningaaEngineUnexpected
packet from ND Area_DASTS <P2 E9>, structId 938336640, request
0, start 13753, end 13753, count 3, set 13751, sub 13751,
state 1, commAlarm 0, subStatus <success -1 category 0
detectedBy 0 detail 0>, subQual 192, setCnt 313, setResultCnt
313, wa...aaEngine
367203902/2/20117:35:33 PM29362940WarningaaEngineUnexpected
packet from ND MyEngine_RealTimeDASTS <P2 E9>, structId
938336640, request 0, start 4, end 4, count 1, set 2, sub 2,
state 1, commAlarm 0, subStatus <success -1 category 0
detectedBy 0 detail 0>, subQual 192, setCnt 313, setResultCnt
313, waitSc...aaEngine
367204172/2/20117:35:51 PM15122572WarningaahCfgSvcDriver
stopped (System driver, 2011/02/02 19:35:01.356) [SGR;
aahCfgSvc.cpp; 8401]aahCfgSvc
367204242/2/20117:35:51 PM15122572WarningaahStoreSvcAttempt to
store value prior to disconnect time. Value discarded -
possible loss of data (SGR_sysdrv, 12, 2011/02/02
14:05:48.060, 2011/02/02 14:05:48.000) [SGR; pipeserver.cpp;
958]aahCfgSvc
AppEngine node
217656272/2/20117:33:35 PM32044068WarningaaEngine0:A00 Snapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (1, 2, 0, 0,
9156, 22) [minideltastore.cpp; 2211; 1]aaEngine
WIS node
17157102/2/20117:36:25 PM60485304WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
17157122/2/20117:36:27 PM60484536WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
From the above trend, it is observed that there are frequent data
losses for all types of tags (IO and script) generated from the GR and
AppEngine nodes.
AppEngine node
217655642/2/20117:29:03 PM26242644WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
217655722/2/20117:32:25 PM26242644WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine
From the above trend, it is observed that there is data loss for Script
tags of AppEngine node.
Hyper-V Hosts
Memory 48 GB
Note: For the Hyper-V hosts to function optimally, the servers should
have the same processor, RAM, storage, and service pack level.
Preferably, the servers should be purchased in pairs to avoid hardware
discrepancies. Though differences are supported, they will impact
performance during failovers.
Virtual Machines
Using the Hyper-V host specified above, seven virtual machines can be
created in the environment with the configuration given below.
Memory 8 GB
Storage 200 GB
Memory 8 GB
Storage 100 GB
Memory 4 GB
Storage 80 GB
Memory 4 GB
Storage 80 GB
Memory 4 GB
Storage 80 GB
Memory 4 GB
Storage 80 GB
Memory 4 GB
Storage 80 GB
Network Requirements
For this architecture, you can use one physical network card installed
on the host computer for both the domain network and the process
network.
This setup requires a minimum of two host servers and two storage
servers connected to each host independently. Another independent
node is used for configuring the quorum. For more information on
configuring the quorum, refer to "Configuring Cluster Quorum
Settings" on page 348.
The following procedures help you install and configure a two-node
failover cluster for the medium configuration.
To install the failover cluster feature, you need to run Windows Server
2008 R2 Enterprise Edition on your server.
Note: Repeat this procedure on all the other nodes that will be part of
the cluster configuration.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
5 Click the Run only the tests I select option to skip the storage
validation process, and click Next. The Test Selection screen
appears.
Note: Click the Run all tests (recommended) option to validate the
default selection of tests.
6 Clear the Storage check box, and then click Next. The Summary
screen appears.
7 Click View Report to view the test results or click Finish to close
the Validate a Configuration Wizard window.
A warning message appears indicating that all the tests have not been
run. This usually happens in a multisite cluster where the storage
tests are skipped. You can proceed if there is no other error message. If
the report indicates any other error, you need to fix the problem and
re-run the tests before you continue. You can view the results of the
tests after you close the wizard in
SystemRoot\Cluster\Reports\Validation Report date and time.html
where SystemRoot is the folder in which the operating system is
installed (for example, C:\Windows).
To know more about cluster validation tests, click More about cluster
validation tests on Validate a Configuration Wizard window.
Creating a Cluster
To create a cluster, you need to run the Create Cluster wizard.
To create a cluster
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
4 View the instructions and click Next. The Validation Warning area
appears.
Note: You can either type the server name or click Browse to select
the relevant server name.
7 In the Cluster Name box, type the name of the cluster and click
Next. The Confirmation area appears.
8 Click Next. The cluster is created and the Summary area appears.
9 Click View Report to view the cluster report created by the wizard
or click Finish to close the Create Cluster Wizard window.
You must create and secure the file share that you want to use for the
node and the file share majority quorum before configuring the
failover cluster quorum. If the file share has not been created or
correctly secured, the following procedure to configure a cluster
quorum will fail. The file share can be hosted on any computer running
a Windows operating system.
To configure the cluster quorum, you need to perform the following procedures:
• Create and secure a file share for the node and file share majority
quorum
• Use the failover cluster management tool to configure a node and
file share majority quorum
To create and secure a file share for the node and file share
majority quorum
1 Create a new folder on the system that will host the share
directory.
2 Right-click the folder that you created and click Properties. The
Quorum Properties window for the folder you created appears.
3 Click the Sharing tab, and then click Advanced Sharing. The
Advanced Sharing window appears.
4 Select the Share this folder check box and click Permissions. The
Permissions for Quorum window appears.
6 In the Enter the object name to select box, enter the two node
names used for the cluster in the medium node configuration and
click OK. The node names are added and the Permissions for
Quorum window appears.
7 Select the Full Control, Change, and Read check boxes and click
OK. The Properties window appears.
8 Click OK. The folder is shared and can be used to create virtual machines.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 Right-click the name of the cluster you created and click More
Actions. Click Configure Cluster Quorum Settings. The Configure
Cluster Quorum Wizard window appears.
3 View the instructions on the wizard and click Next. The Select
Quorum Configuration area appears.
Note: The Before you Begin screen appears the first time you run the
wizard. You can hide this screen on subsequent uses of the wizard.
Note: Click the Node Majority option if the cluster is configured for node majority or a single quorum resource. Click the Node and Disk Majority option if the number of nodes is even and not part of a multisite cluster. Click the No Majority: Disk Only option if the disk is being used only for the quorum.
Note: You can either enter the share name or click Browse to select
the relevant shared path.
6 The details you selected are displayed. To confirm the details, click
Next. The Summary screen appears and the configuration details
of the quorum settings are displayed.
After you configure the cluster quorum, you must validate the cluster.
For more information, refer to
http://technet.microsoft.com/en-us/library/bb676379(EXCHG.80).aspx.
Configuring Storage
For any virtualization environment, storage is one of the central barriers to implementing a good virtualization strategy. With Hyper-V, however, VM storage is kept on a Windows file system. You can put VMs on any file system that a Hyper-V server can access. As a result, you can build HA into the virtualization platform and the storage for the virtual machines. This configuration can accommodate a host failure by making storage accessible to all Hyper-V hosts, so that any host can run VMs from the same path on the shared folder. The back end of this storage can be local storage, a storage area network (SAN), iSCSI, or whatever is available to fit the implementation.
The following table lists the minimum storage recommendations for each VM:
Application Engine 2 (Runtime node) Virtual Machine: 80 GB
Historian Client: 80 GB
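Before placing another VM on the shared storage path, it can help to confirm that the path is reachable from the host and has at least the recommended minimum free space. The following is a minimal sketch; the UNC path \\StorageServer\VMs and the 80 GB figure are illustrative assumptions based on the minimums listed above.

import shutil

# Hypothetical shared folder that all Hyper-V hosts can access; replace
# with the path used in your environment.
VM_STORE = r"\\StorageServer\VMs"

# Minimum free space for one more virtual machine, per the
# recommendations above (80 GB).
MIN_FREE_GB = 80

free_gb = shutil.disk_usage(VM_STORE).free / (1024 ** 3)
print("Free space on %s: %.1f GB" % (VM_STORE, free_gb))
if free_gb < MIN_FREE_GB:
    print("Warning: less than %d GB free; not enough for another VM." % MIN_FREE_GB)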
Configuring Hyper-V
Microsoft® Hyper-V™ Server 2008 R2 helps in creating a virtual environment that improves server utilization. It enhances patching, provisioning, management, support tools, processes, and skills. Microsoft Hyper-V Server 2008 R2 provides live migration, cluster shared volume support, and expanded processor and memory support for host systems.
Hyper-V is available in x64-based versions of the Windows Server 2008 R2 operating system, specifically the x64-based versions of Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter.
The following are the pre-requisites to set up Hyper-V:
• x64-based processor
• Hardware-assisted virtualization
• Hardware Data Execution Prevention (DEP)
To configure Hyper-V
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
2 In the Roles Summary area, click Add Roles. The Add Roles
Wizard window appears.
Note: You can also right-click Roles and then click Add Roles Wizard
to open the Add Roles Wizard window.
3 View the instructions on the wizard, and then click Next. The Select Server Roles area appears.
4 Select the Hyper-V check box and click Next. The Create Virtual
Networks area appears.
5 Select the check box next to the required network adapter to make
the connection available to virtual machines. Click Next. The
Confirmation Installation Selections area appears.
9 After you restart the computer, log on with the same ID and password you used to install the Hyper-V role. The installation is completed and the Resume Configuration Wizard window appears with the installation results.
3 Type the relevant job name and description in the Job name and Job description boxes, and then click Create Job. The New Mirror window appears.
After creating all the mirroring jobs, open the SteelEye DataKeeper UI from the All Programs menu by clicking SteelEye DataKeeper MMC. The DataKeeper window appears.
You can navigate to Server Overview under Reports to view all the
servers involved in job replication in one place.
3 View the instructions in the Before You Begin area and click Next.
The Specify Name and Location area appears.
Note: You can either type the location or click Browse to select the
location where you want to store the virtual machine.
6 Select the network to be used for the virtual machine and click
Next. The Connect Virtual Hard Disk area appears.
7 Click the Create a virtual hard disk option and then do the
following:
a In the Name box, type the name of the virtual machine.
b In the Location box, enter the location of the virtual machine.
Note: You can either type the location or click Browse to select the
location of the virtual machine.
c In the Size box, type the size of the virtual machine and then
click Next. The Installation Options area appears.
Note: You need to click either the Use an existing virtual hard disk
or the Attach a virtual hard disk later option, only if you are using an
existing virtual hard disk or you want to attach a virtual disk later.
9 Click Finish. The virtual machine is created with the details you provided. Because this process was started from the Failover Cluster Manager, the High Availability Wizard window appears after the virtual machine is created.
10 Click View Report to view the report or click Finish to close the
High Availability Wizard window.
4 Select the volume for creating a SteelEye mirroring job and click
OK. The Selection Confirmation window appears.
5 Click OK to validate the details that you have entered. The Server
Manager window appears.
Note: To modify the selection, click Cancel and modify the detail as
required in the New DataKeeper Volume Properties window, and click
Apply.
7 Click the Dependencies tab. From the Resource list, select the name of the DataKeeper Volume resource that you created and click OK.
Virtual Node    IO tags (Approx.)    Historized tags (Approx.)

Topic Name    Update Rate    Device Items    Active Items
Topic 0       500            14              5
Topic 1       1000           1               1
Topic 39      1000           4               4
• Engine 1 : 9
• Engine 2 : 2
• Engine 3 : 492
• Engine 4 : 312
• Engine 5 : 507
• Engine 6 : 2
• Engine 7 : 24
• Engine 8 : 24
• Engine 9 : 250
• Engine 10: 508
• Engine 11: 506
• Engine 12: 4
• Engine 13: 22
• Engine 14: 1
• Engine 15: 1
Number of DI objects: 6
The RPO tables for the Live Migration scenarios list the data loss (tags and duration) observed for each node. The entries for the $Second (InTouch) tag are:
• $Second (InTouch): 26 sec
• $Second (InTouch): 14 min
• $Second (InTouch): 12 min 27 sec. Note: RPO is dependent on the time taken by the user to start the InTouchView on the InTouch node and the RTO of the Historian node which historizes this tag.
• $Second (InTouch): 14 min. Note: RPO is dependent on the time taken by the user to start the InTouchView on the InTouch node and the RTO of the Historian node which historizes this tag.
For the Quick Migration scenarios and their observations, refer to "Quick Migration of AppEngine1" on page 408 and "Quick Migration of AppEngine2" on page 412.
select DateTime, TagName, vValue, Quality, QualityDetail, wwParameters
from historian.runtime.dbo.History
where TagName = '$second' and DateTime > '2011-01-27 21:37:53.000'
2011-01-27 21:37:53.3850000$Second5301921
2011-01-27 21:37:54.3670000$Second5401921
2011-01-27 21:37:54.6180000$SecondNULL1241
2011-01-27 21:39:46.1460000$Second4502521
2011-01-27 21:39:46.3590000$Second4601921
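Reading the timestamps in the rows above, the $Second samples jump from 21:37:54 to 21:39:46, which corresponds to a gap of roughly 1 minute 52 seconds in the historized data for this scenario.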
Observations
There were no errors or warnings observed on any of the virtual nodes.
Live Migration of GR
Trends:
In the following Historian Trend, the middle tag
SineWaveCal_001.SinewaveValue receives data from scripts on GR
and is historized.
This is also captured in the RPO table for Live Migration of GR (for
IAS tag (Script)).
Observations
The GR node is Platform1 in the deployed Galaxy. During the Live Migration of GR, there will be an instance when the following occurs:
• The GR node will not be able to connect to the other deployed nodes (Platforms 2, 3, 4, and 5).
• The rest of the virtual machines will not be able to connect to the GR node (Platform1).
As a result, you see the following warnings on each of the VM nodes.
GR
338704741/27/20118:23:12 PM30562192WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
338704751/27/20118:23:12 PM30562192WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
338704761/27/20118:23:12 PM30562192WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
338704771/27/20118:23:12 PM30562192WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
338708221/27/20118:24:11 PM37803784WarningScanGroupRuntime2
Cannot convert Var data type to MxDataType
AppEngine1
The AppEngine1 virtual node has SDK scripts to insert data into the Historian, so the scripts triggered during the Live Migration of the GR node are bound to time out.
Note: The GR node has the SDK installed on it, so all the SDK scripts triggered during the Live Migration of the GR node will be affected.
950760131/27/20118:11:24 PM39043908WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.
950768461/27/20118:12:14 PM39043908WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
950869671/27/20118:22:49 PM16522908WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
AppEngine2 node
The following warnings are observed on the AppEngine2 node during the Live Migration of the GR node.
54466751/27/20118:22:49 PM20882104WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
Historian
None
WIS node
30972761/27/20118:22:49 PM33002960WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
HistClient
None
Observations
AppEngine1 is Platform2 in the deployed Galaxy. During the Live Migration of AppEngine1, there will be an instance when the following occurs:
• AppEngine1 will not be able to connect to the other deployed nodes (Platforms 3, 4, and 5).
• Other nodes will not be able to connect to AppEngine1 (Platform2).
• Some data sent from AppEngine1 will be discarded until the TimeSync utility is executed and the system time of AppEngine1 is synchronized, so the Historian is bound to discard data from the AppEngine1 node.
As a result, you see the following warnings on each of the VM nodes.
GR node
338163681/27/20115:52:20 PM30562192WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
AppEngine1
949341461/27/20115:53:06 PM16522908WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
949341471/27/20115:53:06 PM16522908WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
949341481/27/20115:53:06 PM16522908WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
AppEngine2
54464381/27/20115:52:20 PM20882104WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
Historian
95837141/27/20115:53:13 PM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 19017, 2011/01/27 12:22:27.608,
2011/01/27 12:23:12.991) [HISTORIAN; pipeserver.cpp; 2388]
95838831/27/20115:57:27 PM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 19017, 2011/01/27 12:22:27.608,
2011/01/27 12:23:12.991) [HISTORIAN; pipeserver.cpp; 2388;
17881]
WIS node
30972381/27/20115:52:18 PM33002960WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
Trends:
IAS tags receiving data from IO Server (DAS SI Direct)
In the following Historian Trend, the first two tags receive data from
DDESuiteLinkClient in IAS from the PLC and are historized from
Platform AppEngine2.
This is also captured in the RPO table for Live Migration of
AppEngine2 (for IAS IO tag (DASSiDirect)).
In the following Historian Trend, the last tag is modified using scripts
and is historized from Platform AppEngine2. This is also captured in
the RPO table for Live Migration of AppEngine2 (for IAS tag (Script)).
Observations
AppEngine2 is Platform4 in the deployed Galaxy. During the Live Migration of AppEngine2, there will be an instance when the following occurs:
• AppEngine2 will not be able to connect to the other deployed nodes (Platforms 3, 1, and 2).
• Other nodes will not be able to connect to AppEngine2 (Platform4).
• Some data sent from AppEngine2 will be discarded until the TimeSync utility is executed and the system time of AppEngine2 is synchronized, so the Historian is bound to discard data from the AppEngine2 node.
As a result, you see the following warnings on each of the VM nodes.
GR node
338529761/27/20117:34:30 PM30562192WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
338533811/27/20117:35:38 PM37803784WarningScanGroupRuntime2
Can't convert Var data type to MxDataType
AppEngine1
The warnings below are observed on the AppEngine1 node during the
Live Migration of AppEngine2.
950335701/27/20117:29:36 PM39043908WarningScriptRuntime
Insert.InsertValueDataChange: Script timed out.
950341791/27/20117:30:13 PM39043908WarningScriptRuntime
Insert.InsertValueWF: Script timed out.
950353481/27/20117:31:25 PM39043908WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.
950384051/27/20117:34:29 PM16522908WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
AppEngine2
54465261/27/20117:35:14 PM20882104WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
54465271/27/20117:35:14 PM20882104WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
54465281/27/20117:35:14 PM20882104WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
Historian
95851951/27/20117:35:28 PM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 7033, 2011/01/27 14:04:54.532,
2011/01/27 14:05:27.276) [HISTORIAN; pipeserver.cpp; 2388]
95852311/27/20117:39:41 PM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 7033, 2011/01/27 14:04:54.532,
2011/01/27 14:05:27.276) [HISTORIAN; pipeserver.cpp; 2388;
2872]
This is also captured in the RPO table for Live Migration of Historian
for $Second(InTouch).
This is also captured in the RPO table for Live Migration of Historian
for SysTimeSec (Historian).
Observations
When the Historian node undergoes Live Migration, the following occurs:
• The GR and AppEngine nodes are in store-and-forward mode.
• The Historian Client trend will not be able to connect to the Historian, so a warning is expected.
• The Historian machine's time is not synchronized during the Live Migration of the Historian, so an "Attempt to store values in the future" warning is expected.
• After the Live Migration, the Historian's time needs to be synchronized, so a Server Shifting warning is expected.
After the Live Migration of the Historian node, the stored data is forwarded from the GR and AppEngine nodes.
GR node
339084501/27/201110:08:07 PM37803312WarningaahMDASTime
synchronization with historian node out of spec, please resync
to avoid data loss. (129406198613500000, 129406198876145320)
[StorageNode.cpp,2576]
95871031/27/201110:08:17 PM50765116InfoaahDrvSvcHISTORIAN_1:
Server is too busy (buffers full)
95871041/27/201110:08:17 PM50765116InfoaahDrvSvcHISTORIAN_1:
Clearing buffer cache for recovery (buffers full)
95871051/27/201110:08:17 PM50765116InfoaahDrvSvcHISTORIAN_1:
HISTORIAN: 722 data buffers lost: header/dirty time of the
last lost buffer = 22:07:51.000/22:08:17.018
95871061/27/201110:08:19 PM14923380InfoaahManStSvcSuccessfully
processed file (name)
(C:\Historian\Data\DataImport\Manual\LateData\latedata_129406
197731664416_0000.bin) [HISTORIAN; insertmanual.cpp; 736]
AppEngine1
None
AppEngine2
None
Historian
95871021/27/201110:07:48 PM14923380WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_2, $Time, 23556, 2011/01/27
16:37:47.216, 24) [HISTORIAN; pipeserver.cpp; 1831]
WIS node
None
HistClient
1235971/27/201110:07:59 PM32321312WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
1235981/27/201110:07:59 PM32321312WarningaaTrendDataSetProvider
The configured InSQL server HISTORIAN is either Disconnnected
from Network or in ShutDown/Disable state.
Observations
There were no errors or warnings observed on any of the virtual nodes.
Quick Migration of GR
Trends:
Observations
The GR node is Platform1 in the deployed Galaxy. During the Quick Migration of the GR node, there will be an instance when the following occurs:
• The GR node will not be able to connect to the other deployed nodes (Platforms 2, 3, 4, and 5).
• The rest of the virtual machines will not be able to connect to the GR node (Platform1).
• Since DAS SI Direct provides data to the DDESuiteLinkObject and DAS SI Direct is on the GR node, the message "DAServerManager Target node is down" is expected on AppEngine1 and AppEngine2.
• Scripts involving SDK script library calls time out during the Quick Migration of the GR node.
GR Node
510877212/15/20113:11:51 PM32043224WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510877222/15/20113:11:51 PM32043224WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510877232/15/20113:11:51 PM32043224WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510877242/15/20113:11:51 PM32043224WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510877252/15/20113:11:51 PM32043224WarningNmxSvcPlatform 6
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510878202/15/20113:12:03 PM32043224WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510878492/15/20113:12:08 PM32043224WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
510880302/15/20113:12:30 PM3683932WarningScanGroupRuntime2Can't
convert Var data type to MxDataTypeaaEngine
AppEngine1
1112021172/15/20113:10:32 PM57965620WarningDAServerManager
Target node is down. DASCC will stop scanning for Server
active state.mmc
1112021502/15/20113:10:33 PM29162932WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
1112031342/15/20113:11:31 PM39923228WarningPackageManagerNet
GalaxyMonitor PollPackageServer Communication failed. A
connection attempt failed because the connected party did not
properly respond after a period of time, or established
connection failed because connected host has failed to respond
10.91.60.63:8090aaIDE
AppEngine2
55350642/15/20113:10:33 PM27002684WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
55350652/15/20113:11:33 PM16722940WarningPackageManagerNet
GalaxyMonitor PollPackageServer Communication failed. A
connection attempt failed because the connected party did not
properly respond after a period of time, or established
connection failed because connected host has failed to respond
10.91.60.63:8090aaIDE
WIS node
59743762/15/20113:10:32 PM8443788WarningNmxSvcPlatform 1 exceed
maximum heartbeats timeout of 8000 ms.NmxSvc
Observations
AppEngine1 is Platform2 in the deployed Galaxy. During the Quick Migration of AppEngine1, there will be an instance when the following occurs:
• AppEngine1 will not be able to connect to the other deployed nodes (Platforms 3, 4, and 5).
• The rest of the virtual machines will not be able to connect to AppEngine1 (Platform2).
• AppEngine1 is not time synchronized during the Quick Migration, and a message "Values in the past did not fit within the realtime window" is expected on AppEngine1.
• AppEngine1 is not time synchronized during the Quick Migration, and a message "Values in the past did not fit within the realtime window" is expected on the Historian node.
GR node
336481011/27/201110:20:09 AM30562192WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine1
944715761/27/201110:20:00 AM1652796WarningMessageChannel
idcinsql21 address was not resolved. Error = 10022NmxSvc
944716581/27/201110:23:38 AM16522908WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
944716591/27/201110:23:38 AM16522908WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine2
54457791/27/201110:20:08 AM20882104WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
Historian
95774041/27/201110:21:01 AM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 11057, 2011/01/27 04:50:09.341,
2011/01/27 04:51:00.524) [HISTORIAN; pipeserver.cpp; 2388]
95774811/27/201110:25:12 AM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 11057, 2011/01/27 04:50:09.341,
2011/01/27 04:51:00.524) [HISTORIAN; pipeserver.cpp; 2388;
16827]
Observations
AppEngine2 is Platform4 in the deployed Galaxy. During the Quick Migration of AppEngine2, there will be an instance when the following occurs:
• The AppEngine2 node will not be able to connect to the other deployed nodes (Platforms 1, 2, 3, and 5).
• The rest of the virtual machines will not be able to connect to AppEngine2 (Platform4).
• AppEngine2 is not time synchronized during the Quick Migration, and a message "Values in the past did not fit within the realtime window" is expected on AppEngine2.
• AppEngine2 is not time synchronized during the Quick Migration, and a message "Values in the past did not fit within the realtime window" is expected on the Historian node.
GR node
336724161/27/201111:28:47 AM30562192WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
336733851/27/201111:31:33 AM30562192WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
AppEngine1
945466161/27/201111:28:43 AM54804996WarningDAServerManager
Target node is down. DASCC will stop scanning for Server
active state.mmc
945467031/27/201111:28:47 AM16522908WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.NmxSvc
945467061/27/201111:28:47AM54804996WarningDAServerManagerTarget
node is down. DASCC will stop scanning for Server active
state.mmc
AppEngine2
54457871/27/201111:28:51 AM20882104WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
3000054458051/27/201111:31:24 AM20882104WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
Historian:
95783041/27/201111:29:43 AM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 1024, 2011/01/27 05:58:52.136,
2011/01/27 05:59:41.757) [HISTORIAN; pipeserver.cpp; 2388]
95786821/27/201111:33:55 AM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 1024, 2011/01/27 05:58:52.136,
2011/01/27 05:59:41.757) [HISTORIAN; pipeserver.cpp; 2388;
12243]
$Second (InTouch)
SysTimeSec (Historian)
GR Node
332273241/26/20115:13:38 PM42163488WarningScriptRuntime
AsynchronousBuffer.script6: Script timed out.aaEngine
AppEngine1
934165241/26/20115:13:50 PM39043908WarningScriptRuntime
Insert.InsertValueWT: Script timed out.aaEngine93417566
AppEngine2
54453561/26/20115:13:53 PM852484WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)
Historian
95646651/26/20115:12:30 PM14642120ErroraahStoreSvcERROR:
Specified time cannot be in the future (2011/01/26
11:42:29.856, 2011/01/26 11:48:39.085) [HISTORIAN;
cmanualpipe.cpp; 697]
95647171/26/20115:18:58 PM14642120WarningaahManStSvcCannot
insert values for delta tags into history block (18,
P10128.i20, 2011/01/26 11:42:22.293, 3) [HISTORIAN;
insertmanual.cpp; 4787]
95647181/26/20115:18:58 PM14642120WarningaahManStSvcCannot
insert values for delta tags into history block (18,
P10125.i20, 2011/01/26 11:42:22.293, 3) [HISTORIAN;
insertmanual.cpp; 4787]
95647191/26/20115:18:58 PM14642120ErroraahManStSvcERROR:
Processing file produced errors: file moved to support folder
(latedata_129405161297112310_0000.bin) [HISTORIAN;
insertmanual.cpp; 771]
Hist Client
1134381/26/20115:13:29 PM3232884WarningaaAFCommonTypesUnable to
create dataview: Server must be logged on before executing SQL
query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
FROM History
AppEngine1
AppEngine2
GR
Historian
SysTimeSec (Historian)
Observations
GR Node
339257881/27/201110:56:34 PM37803784WarningScanGroupRuntime2
Can't convert Var data type to MxDataType
339258091/27/201111:08:25 PM30562192WarningNmxSvcPlatform 2
exceed maximum heartbeats timeout of 8000 ms.
339258101/27/201111:08:25 PM30562192WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
339258111/27/201111:08:25 PM30562192WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
339258121/27/201111:08:25 PM30562192WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
339258131/27/201111:08:26 PM37443748WarningScriptRuntime
AsynchronousBuffer.script2: Script timed out.
339263771/27/201111:09:53 PM37444076WarningaahMDASTime
synchronization with historian node out of spec, please resync
to avoid data loss. (129406228210600000, 129406235937946214)
[StorageNode.cpp,2576]
AppEngine1
952460831/27/201110:56:40 PM16522908WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
952460931/27/201111:08:30 PM16522908WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
952460941/27/201111:08:30 PM16522908WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
952460951/27/201111:08:30 PM16522908WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
952461611/27/201111:09:29 PM39043908WarningScriptRuntime
Insert.InsertValueOT: Script timed out.
952595461/27/201111:23:01 PM39763940WarningaaEngine0:9C8
Snapshot write operation took longer than 10 seconds. Change
your system settings to decrease size of snapshot or stop
other applications which may affect this parameter (1, 2, 0,
0, 25796, 10) [minideltastore.cpp; 2211; 1]
952597811/27/201111:23:18 PM39443948WarningScriptRuntime
P30930.SetValues: Script timed out.
952598611/27/201111:23:31 PM39043908WarningScriptRuntime
Insert.InsertValueOF: Script timed out.
AppEngine2
54468511/27/201110:56:41 PM20882104WarningNmxSvcPlatform 5
exceed maximum heartbeats timeout of 8000 ms.
54468571/27/201110:56:45 PM20882104WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
54468641/27/201111:08:39 PM20882104WarningNmxSvcPlatform 3
exceed maximum heartbeats timeout of 8000 ms.
54470211/27/201111:22:59 PM38324072WarningaaEngine0:97C
Snapshot write operation took longer than 10 seconds. Change
your system settings to decrease size of snapshot or stop
other applications which may affect this parameter (1024, 3,
0, 0, 244, 11) [minideltastore.cpp; 2211; 1]
Historian
95878001/27/201110:56:42 PM14923380WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_2, $Time, 23556, 2011/01/27
17:26:40.479, 771) [HISTORIAN; pipeserver.cpp; 1831]
95878041/27/201110:56:58 PM14923380ErroraahStoreSvcERROR:
Specified time cannot be in the future (2011/01/27
17:26:58.457, 2011/01/27 17:39:51.164) [HISTORIAN;
cmanualpipe.cpp; 697]
95878641/27/201111:10:18 PM50765116InfoaahDrvSvcHISTORIAN_1:
HISTORIAN: 17 data buffers lost: header/dirty time of the last
lost buffer = 22:57:26.000/23:10:18.389
95879431/27/201111:10:20 PM14923380WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
95881611/27/201111:23:02 PM14923380WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (2, 1, 0,
2154928, 21081, 14) [HISTORIAN; deltastore.cpp; 2669]
95881671/27/201111:23:04 PM14923380WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (HISTORIAN_mdas, 23580, 2011/01/27 17:52:32.691,
2011/01/27 17:53:04.250) [HISTORIAN; pipeserver.cpp; 2388]
95879431/27/201111:10:20 PM14923380WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_2, $Time, 23556, 2011/01/27
17:26:40.479, 771) [HISTORIAN; pipeserver.cpp; 1831; 77980]
HistClient
1382081/27/201110:56:42 PM32322264WarningaaAFCommonTypesUnable
to create dataview: Server must be logged on before executing
SQL query: SELECT aaT=DateTime, aaN=TagName, aaDV=Value,
aaSV=CONVERT(nvarchar(512),vValue), aaQ=OPCQuality,
aaIQ=Quality, aaQD=QualityDetail, aaS=0
FROM History
1382091/27/201110:56:42 PM32322264WarningaaTrendDataSetProvider
The configured InSQL server HISTORIAN is either Disconnnected
from Network or in shutdown/Disable state
WISNode
30973991/27/201110:56:40 PM33002960WarningNmxSvcPlatform 4
exceed maximum heartbeats timeout of 8000 ms.
30974011/27/201111:09:31 PM33002960WarningNmxSvcPlatform 1
exceed maximum heartbeats timeout of 8000 ms.
AppEngine1
AppEngine2
GR
Historian
$Second (InTouch)
SysTimeSec (Historian)
Observations
Error log after failover:
GR Node
394348452/2/20116:33:30 PM25082512WarningScriptRuntime
GR.privatembytes: Script timed out.aaEngine
AppEngine1
1018912922/2/20116:32:54 PM29442948WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
1018914202/2/20116:33:20 PM30803084WarningScriptRuntime
Insert.InsertValuePerodic: Script timed out.aaEngine
1018914662/2/20116:33:21 PM30803084WarningScriptRuntime
Insert.InsertValueOF: Script timed out.aaEngine
1018918082/2/20116:33:38 PM30803372WarningDCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
1018918122/2/20116:33:39 PM30803372WarningSQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
1019127142/2/20116:53:52 PM26722448WarningInTouchProxyFailed to
reconnecnt to data source aaEngine
1019235332/2/20117:04:27 PM30803084WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
AppEngine2
54719382/2/20116:31:18 PM2508332WarningEnginePrimitiveRuntime
Eng:: m_GDC->GetFile failed. Error 2 (0x00000002): The system
cannot find the file specified
54721462/2/20116:32:29 PM2508332WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)
54721512/2/20116:33:29 PM2508332WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)
54721542/2/20116:34:29 PM2508332WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)
Historian
232629982/2/20116:35:09 PM14722220WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
GR.Engine.ProcessIODataBytes, 182, 2011/02/02 13:05:08.273,
1) [HISTORIAN; pipese...
232630402/2/20116:39:21 PM14722220WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
GR.Engine.ProcessIODataBytes, 182, 2011/02/02 13:05:08.273,
1) [HISTORIAN; pipese...
232662142/2/20117:04:28 PM14722220WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
applications which may affect this parameter (4, 1, 0,
9097524, 1214182, 18) [HISTORIAN; deltastore.cpp; 2669]
232662152/2/20117:04:28 PM14722220WarningaahStoreSvcSnapshot
write operation took longer than 10 seconds. Change your
system settings to decrease size of snapshot or stop other
AppEngine1
AppEngine2
Historian
$Second (InTouch)
Observations
Error log after failover
GR Node
393988582/2/20114:46:34 PM26602664WarningEnginePrimitiveRuntime
Eng:: m_GDC->GetFile failed. Error 2 (0x00000002): The system
cannot find the file specifiedaaEngine
393989522/2/20114:47:31 PM26602664WarningScriptRuntime
GR.privatembytes: Script timed out.aaEngine
AppEngine1
1017936492/2/20114:46:34 PM29522956WarningScriptRuntime
AppEngineNode1.privatembytes: Script timed out.aaEngine
1017938352/2/20114:47:05 PM27002184WarningScriptRuntime
Insert.InsertValueDataChange: Script timed out.aaEngine
1017940012/2/20114:47:32 PM27003592WarningDCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
1017940022/2/20114:47:33 PM27003592WarningSQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured...aaEngine
1017940192/2/20114:48:26 PM27004048WarningScriptRuntime
InsertAsync.InsertValueWF: Script timed out.aaEngine
1017940322/2/20114:48:26 PM27002184WarningScriptRuntime
Insert.InsertValueWF: Script timed out.aaEngine
1018040142/2/20114:58:02 PM20082652WarningInTouchProxyFailed to
reconnecnt to data source aaEngine
1018047432/2/20114:58:43 PM20082652WarningInTouchProxyFailed to
reconnecnt to data source aaEngine
AppEngine2
54716472/2/20114:46:07 PM21721972WarningScriptRuntime
HourVal_003.GenerateHourValue: Script timed out.aaEngine
54716942/2/20114:46:29 PM2480672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
54717192/2/20114:49:29 PM2480672WarningHistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?)aaEngine
Historian
232207772/2/20114:33:29 PM15162260WarningaahStoreSvcValues in
the past did not fit within the realtime window; discarding
data (historian_2, 23987, 2011/02/02 10:49:20.680, 2011/02/02
10:59:15.351) [HISTORIAN; pipeserver.cpp; 2388; 535]aahCfgSvc
232400692/2/20114:49:41 PM15202800WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
IncrementVal_003.IncrementValue, 12307, 2011/02/02
11:19:37.535, 1) [HISTORIAN; p...aahCfgSvc
232401232/2/20114:53:51 PM15202800WarningaahStoreSvcAttempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
AppEngine1
2/4/20114:41:11 PMWarningaaEngine0:A50 Snapshot write operation
took longer than 10 seconds. Change your system settings to
decrease size of snapshot or stop other applications which may
affect this parameter (1024, 3, 0, 0, 7316, 11)
[minideltastore.cpp; 2211; 1]
AppEngine2
2/4/20114:56:49 PMWarningaaEngine0:A08 Snapshot write operation
took longer than 10 seconds. Change your system settings to
decrease size of snapshot or stop other applications which may
affect this parameter (4, 6, 0, 0, 3288, 11)
[minideltastore.cpp; 2211; 1]
GR Node
2/4/20114:50:54 PMWarningaaEngine0:B34 Snapshot write operation
took longer than 10 seconds. Change your system settings to
decrease size of snapshot or stop other applications which may
affect this parameter (1, 2, 0, 0, 301, 10)
[minideltastore.cpp; 2211; 1]
2/4/20114:52:12 PMWarningScriptRuntime
AsynchronousBuffer.script6: Script timed out.
Historian Node
2/4/20114:15:22 PMWarningaahStoreSvcValues in the past did not
fit within the realtime window; discarding data
(HISTORIAN_mdas, 182, 2011/02/04 10:40:24.919, 2011/02/04
10:41:07.901) [HISTORIAN; pipeserver.cpp; 2388; 4836]
WIS Node
2/4/20114:47:02 PMWarningArchestrA.Visualization.WCFServiceLMX
WCF service: Exception while PushBackData(sync mode): [The
operation has timed out.] at [
at
System.ServiceModel.Channels.InputQueue`1.Dequeue(TimeSpan
timeout)
at
System.ServiceModel.Channels.ServicePollingDuplexSessionCh...
From the above trend, it is observed that there is data loss for 33 seconds at a stretch for IAS Script tags on the GR node. It can also be noticed that there are frequent data losses for all types of tags (IO and Script) generated from the GR and AppEngine nodes.
WIS node
2/4/20113:12:32 PMWarningrdbhandler
!RdbHandlerListerner::OnData() - Invalid Session (ID:3801155)
From the above trend, it is observed that there is data loss for IAS IO tags and Script tags on the GR node and AppEngine nodes.
• As per the topology described above for the High Availability and Disaster Recovery environment, only one network is used for all communications. If multiple networks are used, make sure only the primary network used for communication between the Hyper-V nodes is enabled for the Failover Cluster Communication. Disable the remaining cluster networks in Failover Cluster Manager.
• Ensure that the virtual networks created in Hyper-V Manager have the same name across all the nodes that are participating in the cluster. Otherwise, migration/failover of Hyper-V virtual machines will fail.
• Though this is a three-node failover topology, to achieve the
required failover order, a fourth node is required for setting up the
Node Majority in the failover cluster. The three nodes are used for
virtual machine services and the fourth node is used for Quorum
witness. The fourth node is not meant for failover of Hyper-V
virtual machines running on the cluster. This fourth node should
not be marked as the preferred owner while setting up the
preferred owners for the Hyper-V virtual machines running on the
cluster.
The following scenario explains the failover order; a short sketch of the selection logic follows the list.
Node 1 and Node 2 are in the High Availability site and Node 3 is in the Disaster site. The failover sequence is Node 1 > Node 2 > Node 3.
• When all VMs are running on Node 1:
• All three nodes are up. Now Node 1 goes down. The VMs
running on Node 1 move to Node 2.
• Node 1 and Node 3 are up and Node 2 is down. Now Node 1
goes down. The VMs running on Node 1 move to Node 3.
• When all VMs are running on Node 2:
• Node 2 and Node 3 are up and Node 1 is down. Now Node 2
goes down. The VMs running on Node 2 move to Node 3.
• All three nodes are up. Now Node 2 goes down. The VMs
running on Node 2 move to Node 3.
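The cases above amount to moving the virtual machines to the first available node that follows the failed node in the preferred-owner sequence, with no automatic failback to an earlier node. The following is a minimal sketch of that selection logic; the node names and the notion of an "up" set are illustrative assumptions, not part of the Failover Cluster Manager configuration itself.

# Preferred-owner sequence for the Hyper-V virtual machines, as described above.
FAILOVER_ORDER = ["Node 1", "Node 2", "Node 3"]

def next_owner(failed_node, up_nodes, order=FAILOVER_ORDER):
    """Return the node the VMs move to when failed_node goes down: the first
    available node that follows failed_node in the preferred sequence."""
    start = order.index(failed_node) + 1
    for node in order[start:]:
        if node in up_nodes:
            return node
    return None  # no node later in the sequence is available

# All three nodes are up and Node 1 goes down: VMs move to Node 2.
print(next_owner("Node 1", {"Node 2", "Node 3"}))
# Node 2 goes down while Node 1 and Node 3 are up: VMs move to Node 3.
print(next_owner("Node 2", {"Node 1", "Node 3"}))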
The following table lists the impact on CPU utilization and bandwidth at various compression levels.
• Medium Configuration Load: Approximately 50000 IO Points with approximately 20000 attributes being historized.
• Network: Bandwidth controlled at 45 Mbps with no latency.
These readings are taken while mirroring is continuously happening between the source and the destination storage SANs, with all the VMs running on the source host server. The data captured shows that the % CPU utilization of the SteelEye DataKeeper mirroring process increases with increasing compression levels. Based on these findings, it is recommended to use Compression Level 2 in the Medium Scale Virtualization environment.
The table columns are: % Processor Time of the SteelEye DataKeeper mirroring process (ExtMirrSvc), % Processor Time of the overall CPU, and Total Bytes/Sec.
Historian
• In case of Live and Quick migration of the Historian, you may notice that the Historian logs values with quality detail 448 and that some values are logged twice with the same timestamps (a query sketch for spotting such values follows this list). This is because the suspended Historian VM starts on the other cluster node with the system time at which it was suspended before the migration. As a result, some of the data points it receives with the current time appear to the Historian to be in the future. The Historian then modifies the timestamps to its own system time and updates the QD to 448. This happens until the system time of the Historian node catches up with the real current time using the TimeSync utility, after which the problem goes away. Therefore, it is recommended to stop the Historian before the migration and restart it after the VM is migrated and its system time is synced up with the current time.
• Live and Quick migration of the Historian should not be done while a block changeover is in progress on the Historian node.
• If a failover happens (for example, due to a network disconnect on the source host Virtualization Server) while the Historian status is still "Starting", the Historian node fails over to the target host Virtualization Server. In the target host, the Historian fails to start. To recover from this state, kill the Historian services that failed to start and then start the Historian by launching the SMC.
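As a quick way to check whether a migration left behind overwritten timestamps, you can query the History table for values whose QualityDetail is 448. The following is a minimal sketch using the pyodbc package; the server name HISTORIAN, the ODBC driver, Windows authentication, and the example time window are illustrative assumptions.

import pyodbc

# Connect to the Historian's Runtime database. Adjust the server name,
# driver, and authentication to match your environment (assumptions).
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=HISTORIAN;DATABASE=Runtime;"
    "Trusted_Connection=yes"
)
cursor = conn.cursor()

# Find samples whose timestamps were overwritten by the Historian
# (quality detail 448) during an example window around the migration.
cursor.execute(
    "SELECT DateTime, TagName, vValue, QualityDetail "
    "FROM History "
    "WHERE TagName = '$Second' AND QualityDetail = 448 AND DateTime > ? "
    "ORDER BY DateTime",
    "2011-01-27 21:00:00",
)
for row in cursor.fetchall():
    print(row.DateTime, row.TagName, row.vValue, row.QualityDetail)
conn.close()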
InTouch
• Ensure that InTouch WindowViewer is added to the Startup programs so that the view starts automatically when the virtual machine reboots.
Application Server
• If a failover happens (for example, due to a network disconnect on the source host Virtualization Server) while the Galaxy Migration is in progress, the GR node fails over to the target host Virtualization Server. In the target host, on opening the IDE for the Galaxy, the templates do not appear in the Template Toolbox or the Graphic Toolbox. To recover from this state, delete the Galaxy, create a new Galaxy, and initiate the migration process once again.
• If a failover happens (for example, due to an abrupt power-off on the source host Virtualization Server) while a platform deploy is in progress, the Platform node fails over to the target host Virtualization Server. In the target host, some objects will be in the deployed state and the rest will be in the undeployed state. To recover from this state, redeploy the whole Platform once again.
• If a failover happens (for example, due to an abrupt power-off on the source host Virtualization Server) while a platform undeploy is in progress, the Platform node fails over to the target host Virtualization Server. In the target host, some objects will be in the undeployed state and the rest will be in the deployed state. To recover from this state, undeploy the whole Platform once again.
Hyper-V Hosts
Memory: 48 GB
Note: For the Hyper-V hosts to function optimally, the servers should have the same processor, RAM, storage, and service pack level. Preferably, the servers should be purchased in pairs to avoid hardware discrepancies. Though differences are supported, they impact performance during failovers.
Virtual Machines
Using the Hyper-V host specified above, seven virtual machines can be created in the environment with the configuration given below:
• Memory 8 GB, Storage 200 GB
• Memory 8 GB, Storage 100 GB
• Memory 4 GB, Storage 80 GB
• Memory 4 GB, Storage 80 GB
• Memory 4 GB, Storage 80 GB
• Memory 4 GB, Storage 80 GB
• Memory 4 GB, Storage 80 GB
Network Requirements
For this architecture, you can use one physical network card installed on the host computer for both the domain network and the process network.
The following process guides you through setting up high availability and disaster recovery for a medium scale virtualization environment.
This setup requires a minimum of three host servers and two storage servers with sufficient disk space to host the virtual machines on each disk. One storage server is shared across two hosts on one site and the other storage server is connected to the third host. Each disk created on the storage server is replicated across all the sites for disaster recovery. Node 4 is used for Node Majority in the failover cluster. Another independent node is used for configuring the quorum. For more information on configuring the quorum, refer to "Configuring Cluster Quorum Settings" on page 468.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
Note: You can either enter the server name or click Browse and select
the relevant server name.
Note: You can add one or more server names. To remove a server
from the Selected servers list, select the server and click Remove.
5 Click the Run only the tests I select option to skip the storage
validation process, and click Next. The Test Selection screen
appears.
Note: Click the Run all tests (recommended) option to validate the
default selection of tests.
6 Clear the Storage check box, and then click Next. The Summary
screen appears.
7 Click View Report to view the test results or click Finish to close
the Validate a Configuration Wizard window.
A warning message appears indicating that all the tests have not been
run. This usually happens in a multisite cluster where the storage
tests are skipped. You can proceed if there is no other error message. If
the report indicates any other error, you need to fix the problem and
re-run the tests before you continue. You can view the results of the
tests after you close the wizard in
SystemRoot\Cluster\Reports\Validation Report date and time.html
where SystemRoot is the folder in which the operating system is
installed (for example, C:\Windows).
To know more about cluster validation tests, click More about cluster validation tests on the Validate a Configuration Wizard window.
Creating a Cluster
To create a cluster, you need to run the Create Cluster wizard.
To create a cluster
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
Note: If the User Account Control dialog box appears, confirm the
action you want to perform and click Yes.
Note: You can either enter the server name or click Browse and select
the relevant server name.
7 In the Cluster Name field, type the name of the cluster and click
Next. The Confirmation area appears.
8 Click Next. The cluster is created and the Summary area appears.
9 Click View Report to view the cluster report created by the wizard
or click Finish to close the Create Cluster Wizard window.
To create and secure a file share for the node and file share
majority quorum
1 Create a new folder on the system that will host the share
directory.
2 Right-click the folder that you have created and click Properties.
The Quorum Properties window for the folder you created
appears.
3 Click the Sharing tab, and then click Advanced Sharing. The
Advanced Sharing window appears.
4 Select the Share this folder check box and click Permissions. The
Permissions for Quorum window appears.
6 In the Enter the object name to select box, enter the four node names used for the cluster in the high availability and disaster recovery configuration and click OK. The node names are added and the Permissions for Quorum window appears.
7 Select the Full Control, Change, and Read check boxes and click
OK. The Properties window appears.
8 Click OK. The folder is shared and can be used to create virtual machines.
Note: You can also access the Server Manager window from the
Administrative Tools window or the Start menu.
2 Right-click the name of the cluster you have created and click More
Actions. Click Configure Cluster Quorum Settings. The Configure
Cluster Quorum Wizard window appears.
3 View the instructions on the wizard and click Next. The Select
Quorum Configuration area appears.
Note: The Before you Begin screen appears the first time you run the
wizard. You can hide this screen on subsequent uses of the wizard.
Note: Click the Node Majority option if the cluster is configured for
node majority or a single quorum resource. Click the Node and Disk
Majority option if the number of nodes is even and not part of a
multisite cluster. Click the No Majority: Disk Only option if the disk is
being used only for the quorum.
Note: You can either enter the server name or click Browse to select
the relevant shared path.
6 The details you have selected are displayed. To confirm the details,
click Next. The Summary screen appears and the configuration
details of the quorum settings are displayed.
After you configure the cluster quorum, you must validate the cluster.
For more information, refer to
http://technet.microsoft.com/en-us/library/bb676379(EXCHG.80).aspx.
Configuring Storage
For any virtualization environment, storage is one of the central barriers to implementing a good virtualization strategy. With Hyper-V, however, VM storage is kept on a Windows file system. You can put VMs on any file system that a Hyper-V server can access. As a result, you can build HA into the virtualization platform and the storage for the virtual machines. This configuration can accommodate a host failure by making storage accessible to all Hyper-V hosts, so that any host can run VMs from the same path on the shared folder. The back end of this storage can be local storage, a storage area network (SAN), iSCSI, or whatever is available to fit the implementation.
The following table lists the minimum storage recommendations for each VM in the medium scale virtualization environment:
Application Engine 2 (Runtime Node) Virtual Machine: 80 GB
Historian Client: 80 GB
Configuring Hyper-V
Microsoft Hyper-V Server 2008 R2 helps in creating a virtual environment that improves server utilization. It enhances patching, provisioning, management, support tools, processes, and skills. Microsoft Hyper-V Server 2008 R2 provides live migration, cluster shared volume support, and expanded processor and memory support for host systems.
Hyper-V is available in x64-based versions of the Windows Server 2008 R2 operating system, specifically the x64-based versions of Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter.
The following are the pre-requisites to set up Hyper-V:
• x64-based processor
• Hardware-assisted virtualization
• Hardware Data Execution Prevention (DEP)
To configure Hyper-V
1 Click the Server Manager icon on the toolbar. The Server Manager
window appears.
2 In the Roles Summary area, click Add Roles. The Add Roles
Wizard window appears.
Note: You can also right-click Roles, and then click Add Roles Wizard
to open the Add Roles Wizard window.
3 Read the instructions on the wizard and then click Next. The
Select Server Roles area appears.
4 Select the Hyper-V check box and click Next. The Create Virtual
Networks area appears.
5 Select the check box next to the required network adapter to make
the connection available to virtual machines. Click Next. The
Confirmation Installation Selections area appears.
9 After you restart the computer, log on with the same ID and password you used to install the Hyper-V role. The installation is completed and the Resume Configuration Wizard window appears with the installation results.
3 Type the relevant job name and description in the Job name and
Job description boxes, and then click Create Job. The New Mirror
window appears.
After creating all the mirroring jobs, open the SteelEye DataKeeper UI from the All Programs menu by clicking SteelEye DataKeeper MMC. The DataKeeper window appears.
You can navigate to Job Overview under Reports to view all the jobs
in one place.
You can navigate to Server Overview under Reports to view all the
servers involved in job replication in one place.
3 View the instructions in the Before You Begin area and click Next.
The Specify Name and Location area appears.
Note: You can either type the location or click Browse and select the
location where you want to store the virtual machine.
6 Select the network to be used for the virtual machine from the
Connection drop-down list, and click Next. The Connect Virtual
Hard Disk area appears.
7 Click the Create a virtual hard disk option and then do the
following:
a In the Name field, type the name of the virtual machine.
b In the Location field, enter the location of the virtual machine.
Note: You can either type the location or click Browse and select the
location of the virtual machine.
c In the Size field, type the size of the virtual machine, and then
click Next. The Installation Options area appears.
Note: You need to click the Use an existing virtual hard disk or the
Attach a virtual hard disk later option, only if you are using an
existing virtual hard disk or you want to attach a virtual disk later.
8 Click the Install an operating system later option and click Next. The Completing the New Virtual Machine Wizard area appears.
9 Click Finish. The virtual machine is created with the details you have provided. Because this process was started from the Failover Cluster Manager, the High Availability Wizard window appears after the virtual machine is created.
10 Click View Report to view the report or click Finish to close the
High Availability Wizard window.
4 Select the volume for creating a disk monitoring job and click OK.
The Selection Confirmation window appears.
5 Click OK to validate the details that you have entered. The Server
Manager window appears.
Note: To modify the selection, click Cancel and modify the detail as
required in the New DataKeeper Volume Properties window, and then
click Apply.
7 Click the Dependencies tab. From the Resource list, select the name of the DataKeeper Volume resource that you have created, and then click OK.
Note: You can use the above procedure to create multiple virtual
machines with appropriate names and configuration.
AppEngine1
AppEngine2
GR
Historian
$Second (InTouch)
SysTimeSec (Historian)
Observations
When the host server is turned off, all the VMs shut down abruptly and restart after moving to the other host server.
• The GR Node, AppEngine1, and AppEngine2 virtual nodes have SDK scripts to insert data into the Historian. The scripts triggered during the power-off of the host server also result in script timeouts on the AppEngine1, AppEngine2, and GR nodes.
• As the Historian suddenly becomes unavailable to the other nodes, a network-related or instance-specific error can occur while establishing a connection to SQL Server, and the nodes fail to connect to the Historian.
GR Node
44655304 2/8/2011 5:22:30 PM 2912 2916 Warning EnginePrimitiveRuntime
Eng:: m_GDC->GetFile failed. Error 2 (0x00000002): The system
cannot find the file specified aaEngine
44655378 2/8/2011 5:22:44 PM 2912 2916 Warning ScriptRuntime
GR.privatembytes: Script timed out. aaEngine
AppEngine1
105586241 2/8/2011 5:21:51 PM 2896 2900 Warning ScriptRuntime
AppEngineNode1.privatembytes: Script timed out. aaEngine
105586596 2/8/2011 5:22:35 PM 3112 2756 Warning DCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
105586597 2/8/2011 5:22:35 PM 3112 2756 Warning SQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured... aaEngine
AppEngine2
5497622 2/8/2011 5:21:19 PM 2684 2676 Warning ScriptRuntime
HourVal_003.GenerateHourValue: Script timed out. aaEngine
5497644 2/8/2011 5:21:37 PM 1540 496 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5497652 2/8/2011 5:22:37 PM 1540 496 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5497653 2/8/2011 5:23:38 PM 1540 496 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5497654 2/8/2011 5:24:38 PM 1540 496 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5497661 2/8/2011 5:25:38 PM 1540 496 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
Historian
None
Scenario / Observation table: rows for AppEngine1, AppEngine2, GR, and Historian; tags observed: $Second (InTouch), SysTimeSec (Historian).
Observations
When the host server is disconnected from the network, all the VMs
shut down abruptly and restart after moving to the other host server.
The following occurs:
• The GR Node, AppEngine1, and AppEngine2 virtual nodes have SDK
scripts that insert data into Historian. The scripts triggered while
the host server is off result in script timeouts on the AppEngine1,
AppEngine2, and GR nodes.
• Because Historian is not available to the other nodes, a
network-related or instance-specific error can occur while
establishing a connection to SQL Server, and the nodes fail to
connect to Historian.
• Some data sent from the GR Node, AppEngine1, and AppEngine2 is
discarded until the TimeSync utility is executed and the system time
of Historian is synchronized, as the sketch following these
observations illustrates.
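The Historian warnings in the log excerpts below ("Attempt to store values in the future; timestamps were overwritten with current time") reflect this behavior: until the node clocks are resynchronized, samples arriving with timestamps ahead of the Historian clock are re-stamped with the current time. The following is a minimal illustrative sketch of that rule only; the function name and tolerance are hypothetical and are not part of any Wonderware API.

from datetime import datetime, timedelta

# Hypothetical illustration: re-stamp samples whose timestamps are ahead of
# the historian's clock, as described in the aahStoreSvc warnings below.
FUTURE_TOLERANCE = timedelta(seconds=0)   # assumed tolerance for this sketch

def store_value(tag_name, timestamp, value, now=None):
    """Return the (possibly adjusted) timestamp and value used to store the sample."""
    now = now or datetime.utcnow()
    if timestamp - now > FUTURE_TOLERANCE:
        # The value is "in the future" relative to the historian clock:
        # overwrite its timestamp with the current time and log a warning.
        print(f"Warning: {tag_name} timestamp {timestamp} is in the future; "
              f"overwritten with {now}")
        timestamp = now
    return timestamp, value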
GR Node
44859244 2/11/2011 9:00:06 AM 2792 2940 Warning NmxSvc Initial
connection packet received from platform 3, which is not in
platforms table. NmxSvc
44859765 2/11/2011 9:01:27 AM 2848 2852 Warning ScriptRuntime
GR.privatembytes: Script timed out. aaEngine
44862676 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862677 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862678 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862679 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862697 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862698 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
44862699 2/11/2011 9:06:00 AM 3716 3720 Warning ScanGroupRuntime2
Can't convert Var data type to MxDataType aaEngine
AppEngine1
105913076 2/11/2011 8:59:15 AM 2860 2864 Warning
EnginePrimitiveRuntime Eng:: m_GDC->GetFile failed. Error 2
(0x00000002): The system cannot find the file specified
aaEngine
105913159 2/11/2011 9:00:30 AM 2860 2864 Warning ScriptRuntime
AppEngineNode1.privatembytes: Script timed out. aaEngine
105913502 2/11/2011 9:01:15 AM 3032 4132 Warning DCMConnectionMgr
Open() of DCMConnection failed: 'A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured... aaEngine
105913503 2/11/2011 9:01:15 AM 3032 4132 Warning SQLDataRuntime3
SQLTestResults Command Failure - A network-related or
instance-specific error occurred while establishing a
connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that
SQL Server is configured... aaEngine
AppEngine2
5501918 2/11/2011 9:00:03 AM 1724 1776 Warning ScriptRuntime
HourVal_003.GenerateHourValue: Script timed out. aaEngine
5501954 2/11/2011 9:00:16 AM 2664 2808 Warning NmxSvc Platform 1
exceed maximum heartbeats timeout of 7500 ms. NmxSvc
5501974 2/11/2011 9:00:28 AM 2488 2484 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5501975 2/11/2011 9:01:29 AM 2488 2484 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5501976 2/11/2011 9:02:30 AM 2488 2484 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
5501983 2/11/2011 9:03:30 AM 2488 2484 Warning HistorianPrimitive
AppEngineNode2 failed to connect to Historian HISTORIAN. (Does
ArchestrA user account exist on Historian(InSQL) node or does
Historian not exist?) aaEngine
Historian
23519483 2/11/2011 9:05:27 AM 3048 3692 Warning aahStoreSvc Attempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
SlowScanUDO.IncrementValue, 334, 2011/02/11 03:35:27.296, 1)
[HISTORIAN; pipeserv... aahCfgSvc
23519579 2/11/2011 9:09:42 AM 3048 3692 Warning aahStoreSvc Attempt to
store values in the future; timestamps were overwritten with
current time (pipe name, TagName, wwTagKey, system time (UTC),
time difference (sec)) (HISTORIAN_mdas,
SlowScanUDO.IncrementValue, 334, 2011/02/11 03:35:27.296, 1)
[HISTORIAN; pipeserv... aahCfgSvc
Chapter 5
This chapter describes how to use the features of Windows Server 2008
R2 to perform the following functions:
• Using VLAN for Communication Between System Platform Nodes
• Using VLAN for RMC Communication Between Redundant
Application Server Nodes
• Accessing a System Platform Node with a Remote Desktop
• Accessing System Platform Applications as Remote Applications
• Displaying the System Platform Nodes on a Multi-Monitor with a
Remote Desktop
• Working with Network Load Balancing
• Hardware Licenses in a Virtualized Environment
Note: A virtual network works like a physical network except that the
switch is software based. After an external virtual network is
configured, all networking traffic is routed through the virtual switch.
Note: Do not select this check box if you are creating a virtual
network for communication between VM nodes and a plant network.
2 Shut down the VM node to which you want to add the network
adapter.
Right-click the required VM node. The VM menu appears.
Note: All traffic for the management operating system that goes
through the network adapter is tagged with the VLAN ID you enter.
2 Shut down the VM node to which you want to add the network
adapter.
Right-click the required VM node. The VM menu appears.
Note: You must provide the same VLAN ID that you provided for the
first VM node you configured.
Note: In the Settings window, enter the same VLAN ID that you
entered while configuring the InTouch and Historian Client nodes. This
enables the VM nodes to communicate internally over the specified LAN
ID.
Note: In the Settings window, enter the same VLAN ID you entered
on both the Application Server nodes for virtual network adapter. This
enables the Application Server VM node to communicate internally over
the specified LAN ID as an RMC channel to communicate to another
Application Server VM node.
d Enter the IP address for the network adapter, and then click
OK.
b Select the users you want to allow access to, click Add, and
then click OK to close the window.
• The client node and the Remote Desktop Session Host server
should be able to communicate.
c Select the Remote Desktop Services check box, and then click
Next. The Remote Desktop Services screen appears.
Note: These are the role services that are being installed in this
procedure. You can select other role services, as required.
3 Add users.
a Click the User Assignment tab.
b Click the Specified domain users and domain groups option.
c Click Add. The Select Users, Computers, or Groups window
appears.
You need to prepare another node where the Remote Desktop role service
is installed and the Remote Desktop Connection Broker service is enabled.
For more information, refer to "Installing and Configuring the Remote
Desktop Web Access Role Service at a Remote Desktop Session Host
Server Node" on page 543.
f Select the Computers check box, and then click OK to close the
window. The Select Users, Computers, or Groups window
appears.
g In the Enter the object names to select box, enter the
computer account of the Remote Desktop Web Access server,
and then click OK.
h Click OK to close the TS Web Access Computers Properties
dialog box.
5 Add the client node name in TS Web Access Computers security
group on the Remote Desktop Connection Broker Server name.
Follow steps a to h of point 4 to add the client name.
Alarm Suite History Migration
InTouch (WindowMaker, WindowViewer)
Note: Windows Server 2003 does not support RDP 7.0. To use
Windows XP, the client machine must be updated with RDP 7.0.
After the client machine is prepared, you can display the system
platform on a multi-monitor with a remote desktop.
To display the system platform nodes on a multi-monitor
with a remote desktop
1 Ensure that the client machine is able to detect plugged-in
secondary monitors. On the Start menu, click Control Panel,
Display, Change Display Settings, then Detect. This ensures that
all plugged-in monitors are detected.
2 Modify the display settings.
a In the Control Panel window, click Display, and then click
Change display settings. The Change the appearance of your
displays area appears.
Note: If the client machine does not have RDP 7.0, this option will not
be available to you.
b Launch the IOM product and test the application. Drag and
drop to move the application between the different monitors.
d Select the Remote Desktop Services check box, and then click
Next. The Remote Desktop Services screen appears.
Note: There are two types of Windows Client Access Licenses from
which to choose: device-based or user-based, also known as Windows
Device CALs or Windows User CALs. This means you can choose to
acquire a Windows CAL for every device (used by any user) accessing
your servers, or you can choose to acquire a Windows CAL for every
named user accessing your servers (from any device).
To install an NLB
1 Open the Server Manager window.
Click Start, point to Administrative Tools, and then click Server
Manager. The Server Manager window appears.
c Select the Network Load Balancing check box, and then click
Next. The Confirm Installation Selections screen appears.
e Select the Computers check box, and then click OK. The node
names of the computer appear in the Select Users, Computers,
or Groups window.
Note: You can also use the default port rules to create an NLB cluster.
b In the Host box, enter the name of the host (node 1), and then
click Connect.
c Under Interfaces available for configuring a new cluster,
select the interface to be used with the cluster, and then click
Next. The Host Parameters section in the New Cluster window
appears.
Note: The value in the Priority box is the unique ID for each host. The
host with the lowest numerical priority among the current members of
the cluster handles all of the cluster's network traffic that is not
covered by a port rule. You can override these priorities or provide load
balancing for specific ranges of ports by specifying the rules on the Port
rules tab of the Network Load Balancing Properties window. The sketch
following this procedure illustrates the default-host selection.
e In the Full Internet name box, enter the name of the new
cluster.
f Click the Multicast option, and then click Next. The Port Rules
section in the New Cluster window appears.
Note: If you click the Unicast option, NLB instructs the driver that
belongs to the cluster adapter to override the adapter's unique, built-in
network address and change its MAC address to the cluster's MAC
address. Nodes in the cluster can communicate with addresses outside
the cluster subnet. However, no communication occurs between the
nodes in the cluster subnet.
Note: If you click the Multicast option, both network adapter and
cluster MAC addresses are enabled. Nodes within the cluster are able to
communicate with each other within the cluster subnet, and also with
addresses outside the subnet.
g Click Finish to create the cluster and close the window. The
Network Load Balancing Manager window appears.
b In the Host box, enter the name of node 2, then click Connect.
c Under Interfaces available for configuring a new cluster,
select the interface to be used with the cluster, and then click
Next. The Host Parameters section in the New Cluster window
appears.
d In the Priority box, enter the required value, and then click
Next. The Port Rules section of the Add Host to Cluster
window appears.
e Click Finish to add the host and close the window. The
Network Load Balancing Manager window appears.
4 Select the users you want to allow access to, click Add, and then
click OK to close the window.
Note: The users can be local users and need not be domain users or
administrators. If they are local users, they should be added on both
NLB cluster nodes with the same user name and password.
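The host-priority behavior described in the note earlier in this procedure can be summarized as follows: traffic that no port rule covers is handled by whichever current cluster member has the lowest numerical priority. Here is a minimal, purely illustrative sketch of that selection logic; the function names and values are hypothetical and are not part of the NLB API.

# Hypothetical model of NLB default-host selection: traffic not covered by a
# port rule goes to the live host with the lowest numerical priority.
def covered_by_port_rule(port, port_rules):
    """Return True if any port rule covers the destination port."""
    return any(low <= port <= high for (low, high) in port_rules)

def default_host(hosts):
    """hosts: dict mapping host name -> priority (unique ID per host)."""
    return min(hosts, key=hosts.get)

hosts = {"node1": 1, "node2": 2}          # assumed priorities
port_rules = [(80, 80), (443, 443)]       # assumed load-balanced port ranges

for port in (80, 3389):
    if covered_by_port_rule(port, port_rules):
        print(f"Port {port}: load-balanced across cluster members per port rule")
    else:
        print(f"Port {port}: handled by default host {default_host(hosts)}")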
2 Edit settings.
a In the Edit settings area, under Remote Desktop Connection
Broker, double-click Member of farm in RD Connection
Broker. The Properties window appears.
Note: By assigning a relative weight value, you can distribute the load
between more powerful and less powerful servers in the farm. By
default, the weight of each server is "100". You can modify this value,
as required; the sketch at the end of this procedure illustrates
weight-based distribution.
Note: Repeat this procedure on node 2. Ensure that you enter the
same details in each step for node 2 as you did for node 1. In the Farm
Name box, enter the same farm name you used while configuring
node 1.
b In the Group Name box, enter the name of the group, and then
click OK to close the window.
Note: The group name need not be the same as the cluster name.
You can now select the newly-created group name in the left pane
and view the sessions connected to each node of the cluster.
Note: You can either type or click Browse to select the required node
name.
You can now select the newly-created group name in the left pane and
view the sessions connected to each node of the cluster.
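The relative weight mentioned in the note above influences how the Connection Broker distributes new sessions between farm members. The following is a minimal illustrative sketch of weight-proportional selection with hypothetical server names and weights; it is not the Connection Broker's actual algorithm, which also considers current session counts.

import random

# Hypothetical illustration: weight-proportional selection of a farm member.
# By default every server has weight 100; a more powerful server can be given
# a higher weight so it receives proportionally more new sessions.
farm = {"RDSH-1": 100, "RDSH-2": 200}    # assumed farm members and weights

def pick_server(farm):
    servers = list(farm)
    weights = [farm[s] for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]

counts = {s: 0 for s in farm}
for _ in range(3000):                    # simulate 3000 incoming connections
    counts[pick_server(farm)] += 1
print(counts)                            # RDSH-2 receives roughly twice as many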
Chapter 6
SCVMM Features
Virtual Machine and Host Management
This feature is used to create and manage virtual machines. If you add
a host running Windows Server 2008, which is not Hyper-V enabled,
SCVMM 2008 automatically enables the Hyper-V role on the host.
Intelligent Placement
When a virtual machine is deployed, SCVMM 2008 analyzes
performance data and resource requirements for both the workload
and the host. By using this analysis, you can modify placement
algorithms to get customized deployment recommendations.
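As an illustration of the idea, the placement rating can be thought of as a score computed from each host's free resources against the virtual machine's requirements. The following is a simplified, hypothetical scoring sketch; it is not SCVMM's actual algorithm, which weighs more factors and configurable placement settings.

# Hypothetical illustration of intelligent placement: rate each candidate host
# 0-5 stars from its free resources versus the VM's requirements.
def rate_host(host, vm):
    if host["free_mem_gb"] < vm["mem_gb"] or host["free_disk_gb"] < vm["disk_gb"]:
        return 0                               # cannot host the VM at all
    mem_headroom = host["free_mem_gb"] / (vm["mem_gb"] * 4)
    cpu_headroom = host["free_cpu_pct"] / 100
    return round(5 * min(1.0, (mem_headroom + cpu_headroom) / 2), 1)

hosts = {                                      # assumed inventory data
    "HOST-A": {"free_mem_gb": 24, "free_disk_gb": 500, "free_cpu_pct": 60},
    "HOST-B": {"free_mem_gb": 4,  "free_disk_gb": 80,  "free_cpu_pct": 20},
}
vm = {"mem_gb": 4, "disk_gb": 60}

ratings = {name: rate_host(h, vm) for name, h in hosts.items()}
print(sorted(ratings.items(), key=lambda kv: -kv[1]))   # best-rated host first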
Library Management
The SCVMM library contains file-based resources and hardware
profiles that you can use to create standardized virtual machines.
Physical to Virtual (P2V) and Virtual to Virtual (V2V) Conversion
SCVMM 2008 helps improve the P2V experience by integrating the
P2V conversion process and using the Volume Shadow Copy Service
(VSS) of Windows Server.
SCVMM 2008 also provides a wizard that converts VMware virtual
machines to virtual hard disks (VHDs) through an easy and speedy
V2V transfer process.
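Conceptually, the V2V path is a short pipeline: stage the VMware files in the library, convert the disk images to VHDs, and define a Hyper-V virtual machine around them. The following sketch only models that flow; the function names are hypothetical and do not correspond to SCVMM cmdlets or APIs.

# Hypothetical sketch of the V2V flow: VMware VM files -> VHD -> Hyper-V VM.
def stage_in_library(vmdk_paths, library_share):
    """Copy the source VMware disk files to an SCVMM library share."""
    return [f"{library_share}/{p.split('/')[-1]}" for p in vmdk_paths]

def convert_to_vhd(staged_vmdk):
    """Convert a staged .vmdk disk image to a .vhd disk image."""
    return staged_vmdk.rsplit(".", 1)[0] + ".vhd"

def create_hyperv_vm(name, vhds, host):
    return {"name": name, "disks": vhds, "host": host}

staged = stage_in_library(["/vmware/app01/app01.vmdk"], "//scvmm/MSSCVMMLibrary")
vhds = [convert_to_vhd(p) for p in staged]
print(create_hyperv_vm("app01", vhds, host="HOST-A"))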
Existing Storage Area Network (SAN)
Virtual machine images are often very large and are slow to move
across a local area network (LAN). You can configure SCVMM 2008 to
use the application in an environment that has a fiber channel or a
SAN, so that you can perform SAN transfers within SCVMM.
After VMM 2008 is configured, the application automatically detects
and uses an existing SAN infrastructure to transfer virtual machine
files. This transfer facilitates the movement of large virtual machine
files at the fastest possible speed, and reduces the impact on LAN.
Virtual Machine Self-Service Portal
You can designate self-service users and grant them controlled
access to specific virtual machines, templates, and other SCVMM 2008
resources through a Web-based portal. This controlled access helps
users, such as testers and developers, to allot new virtual machines to
themselves. The users can allot the virtual machines according to the
controls you set by using the self-service policies.
• Another VM image
SCVMM allows you to copy existing virtual machines and create
Hyper-V virtual machines.
A V2V conversion process converts virtual machines to VHDs. You
can use a V2V conversion to convert either an entire virtual
machine or its disk image file to the Microsoft virtual machine
format.
To perform a V2V conversion
a Add the host server-based virtual machine files to a SCVMM
library.
b Select the Convert Virtual Machine option in the Library view
in the SCVMM administrator console.
For more information, refer to "Preparing a Virtual Image from
Another Virtual Image" on page 658.
• Ghost backup
You can create VMs from images supported by third-party vendors,
such as Norton (Norton Ghost).
SCVMM allows you to create a virtual machine using VHD images.
The VHD images are created using a ghost backup.
To create a virtual machine from a ghost backup
a Create a ghost backup (.GHO).
b Convert a ghost backup (.GHO) to a virtual hard disk (.VHD).
c Create a virtual machine from .VHD.
For more information, refer to "Preparing a Virtual Image from a
Ghost Backup" on page 676.
For more information on creating VMs, refer to
http://technet.microsoft.com/en-us/library/cc764227.aspx.
The following sections describe how to create virtual images using
SCVMM.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM Server configuration, if required.
5 Select the source machine or hard disk you want to use for the new
VM.
On the Select Source screen, click the Create the new virtual
machine with a blank virtual hard disk option, and then click
Next. The Virtual Machine Identity screen appears.
Note: You can either type or click Browse to select the relevant owner
name.
b In the Number of CPUs and CPU type lists, click the relevant
details, and then click Memory. The Memory area appears.
d Click the Existing image file option, and then click Browse to
select the required ISO image file from the Select ISO window.
The file name appears in the Existing image file box.
f Click the Place the Virtual Machine on a host option, and then
click Next. The Select Host screen appears.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
Note: This path refers to the drives on the host machine that are free
to be allocated. One drive is allocated to one virtual machine. You can
either type or click Browse to select the relevant path.
c From the Specify the operating system you will install in the
virtual machine list, select the operating system based on the
ISO selected, and then click Next. The Summary screen
appears.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
3 Select the source machine or hard disk you want to use for the new
VM.
On the Select Source screen, click the Create the new virtual
machine with a blank virtual hard disk option, and then click
Next. The Virtual Machine Identity screen appears.
Note: You can either type or click Browse to select the relevant owner
name.
b In the Number of CPUs and CPU type lists, click the relevant
details, and then click Memory. The Memory area appears.
d Click the Physical CD/DVD drive option, and then click Next.
The Select Destination screen appears.
Note: You need to insert the bootable OS in the CD/DVD drive of the
host server machine when you need to create the virtual machine.
e Click the Place the virtual machine on a host option, and then
click Next. The Select Host screen appears.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
Note: This path refers to the drives on the host machine that are free
to be allocated. One drive is allocated to one virtual machine. You can
either type or click Browse to select the relevant path.
c From the Specify the operating system you will install in the
virtual machine list, select the operating system based on the
ISO selected, and then click Next. The Summary screen
appears.
Note: Since SCVMM uses HTTP and WMI services, ensure that the WMI
service is running and that a firewall is not blocking HTTP and WMI
traffic at the source machine.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
Note: You can either type the computer name or click Browse to
select the required computer name.
Note: You can either type or click Browse to select the relevant owner
name.
7 Specify the number of processors and memory for the new VM.
Select the required figures from the Number of processors and
Memory boxes, and then click Next. The Select Host screen
appears.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
Note: If no host in the list has sufficient disk space to host the new
virtual machine, click Previous to return to the Volume Configuration
screen and reduce the size of one or more volumes. You can also
override the default placement options that VMM uses to calculate the
host ratings. Any changes that you make apply only for the virtual
machine that you are deploying.
Note: This path refers to the drives on the host machine that are free
to be allocated. One drive is allocated to one virtual machine. You can
either type or click Browse to select the relevant path.
11 Specify the actions you want the VM to perform when the physical
server starts or stops.
Select the actions as required, and then click Next. The
Conversion Information screen appears.
12 Verify if there are any issues with the conversion, and then click
Next. The Summary screen appears.
Observation
When you create a VM from a physical machine in the online mode, not
all of the data on the physical machine is saved in the VM. This
happens if an IOM product, for example Historian, is installed on the
physical machine and acquires data from a remote SF-enabled
Application Server.
Since the physical machine continues to operate during the conversion,
any data change that happens during the conversion is stored on the
physical machine but not saved in the VM. The state of the created VM
is the same as that of the source machine at the time the conversion is
initiated. Hence, any data change that takes place during the
conversion is not saved in the VM.
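In other words, an online conversion captures the source at a point in time, and anything written to the source after that point exists only on the source. A minimal, purely illustrative sketch of that divergence (the names and data are hypothetical):

import copy

# Hypothetical illustration: an online P2V conversion copies the source state
# at the moment conversion starts; later writes land only on the source.
source = {"history_blocks": ["b1", "b2"]}

vm_image = copy.deepcopy(source)          # conversion snapshot taken here

source["history_blocks"].append("b3")     # data acquired during the conversion

print("source:", source["history_blocks"])     # ['b1', 'b2', 'b3']
print("vm    :", vm_image["history_blocks"])   # ['b1', 'b2'] - 'b3' is missing in the VM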
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
Note: You can either type the computer name or click Browse to
select the required computer name.
Note: You can either type or click Browse to select the relevant owner
name.
8 Specify the number of processors and memory for the new VM.
Select the required figures from the Number of processors and
Memory boxes, and then click Next. The Select Host screen
appears.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
Note: If no host in the list has sufficient disk space to host the new
virtual machine, click Previous to return to the Volume Configuration
screen and reduce the size of one or more volumes. You can also
override the default placement options that VMM uses to calculate the
host ratings. Any changes that you make apply only for the virtual
machine that you are deploying.
Note: This path refers to the drives on the host machine that are free
to be allocated. One drive is allocated to one virtual machine. You can
either type or click Browse to select the relevant path.
12 Specify the actions you want the VM to perform when the physical
server starts or stops.
Select the actions as required, and then click Next. The
Conversion Information screen appears.
13 Verify if there are any issues with the conversion, and then click
Next. The Summary screen appears.
Observation
When you create a VM from a physical machine in the offline mode,
and an IOM product that acquires data from a remote SF-enabled
Application Server (for example, Historian) is installed on the physical
machine, select the Turn off source computer after conversion
check box. This helps save all data in the created VM.
Note: If you do not select the Turn off source computer after
conversion check box, data changes that take place during the
conversion are saved only on the source machine.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
Note: You can either type or click Browse to select the relevant owner
name. The owner must have an Active Directory domain account.
9 Create a template.
Click Create. The Jobs window appears.
View the status of the template you created. The completed status
confirms that the template has been created successfully.
Considerations
• You cannot change the system disk or startup disk configuration.
• Templates are database objects that are displayed in the library.
The templates are displayed in the VMs and Templates folder in
the Library Server.
Requirements
• The virtual hard disk must have a supporting OS installed.
• The administrator password on the virtual hard disk should be
blank as part of the Sysprep process. However, the administrator
password for the guest OS profile may not be blank.
• For customized templates, the OS on the virtual hard disk must be
prepared by removing the computer identity information. For
Windows operating systems, you can prepare the virtual hard disk
by using the Sysprep tool.
Prerequisite
The template of the source VM should be created before creating the
new VM.
For more information on creating a template, refer to "Creating a
Template from an Existing VM" on page 659.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
Note: You can either type or click Browse to select the relevant owner
name. The owner must have an Active Directory domain account.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
c From the Specify the operating system you will install in the
virtual machine list, select the operating system based on the
ISO selected, and then click Next. The Summary screen
appears.
10 Create a template.
Click Create. The Jobs window appears.
View the status of the template you created. The completed status
confirms that the template has been created successfully.
Note: By default, the port number is 8100. However, you can modify it
in the SCVMM server configuration, if required.
b Click Browse to select the .VHD image, and then click OK. The
Virtual Machine Identity screen appears.
Note: You can either type or click Browse to select the relevant owner
name. The owner must have an Active Directory domain account.
Note: All hosts that are available for placement are given a rating of
0 to 5 stars based on their suitability to host the virtual machine. The
ratings are based on the hardware, resource requirements, and
expected resource usage of the virtual machine. The ratings are also
based on placement settings that you can customize for the VMM or
for individual virtual machine deployments. However, the ratings are
recommendations. You can select any host that has the required disk
space and memory available.
Important: In SCVMM 2008 R2, the host ratings that appear first
are based on a preliminary evaluation by SCVMM. The ratings are for
the hosts that run Windows Server 2008 R2 or ESX Server. Click a
host to view the host rating based on a more thorough evaluation.
This tab displays the status of the host and lists the virtual
machines that are currently deployed on it.
• Ratings Explanation
This tab lists the conditions that cause a host to receive a zero
rating.
• SAN Explanation
c From the Specify the operating system you will install in the
virtual machine list, select the operating system based on the
template selected, and then click Next. The Summary screen
appears.
10 Create a template.
Click Create. The Jobs window appears.
View the status of the template you created. The completed status
confirms that the template has been created successfully.
Recommendation
When taking a ghost backup, ensure that all drives on which programs
are installed are part of the backup.
Chapter 7
Implementing Backup
Strategies in a Virtualized
Environment
Checkpointing Method
In this method, you can take point-in-time checkpoints (snapshots) of
the entire VM. We recommend this method because it ensures data
consistency and allows for a fast and complete restore of the entire
VM. One of the few disadvantages of this method is that you need to
restore the entire checkpoint even if only a single file is lost or corrupted.
In a Microsoft virtualized environment, you can take and restore
checkpoints using either System Center Virtual Machine Manager
2008 R2 (VMM) or Microsoft® Hyper-V Manager. The following
sections describe how to implement backup strategies using SCVMM.
Note: By default, the port number is 8100. However, you can modify
it, if required.
Note: To create checkpoints of all VMs, select all VMs together and
then right-click the selection.
Note: The New Checkpoint window does not appear if you are
creating checkpoints for all VMs.
Note: By default, the Name box displays the name of the VM and the
time when the checkpoint is created.
This window displays all the checkpoints created for the VM. The
corresponding details indicate the date and time when each checkpoint
was created. A green dot appears below the checkpoint you created
indicating that it is now active. Click OK to exit the window.
Note: By default, the port number is 8100. However, you can modify
it, if required.
Note: To create checkpoints of all VMs, select all VMs together and
then right-click the selection.
Note: The New Checkpoint window does not appear if you are
creating checkpoints for all VMs.
Note: By default, the Name box displays the name of the VM and the
time when the checkpoint is created.
This window displays all the checkpoints created for the VM. The
corresponding details indicate the date and time when each checkpoint
was created. A green dot appears below the checkpoint you created
indicating that it is now active. Click OK to exit the window.
Restoring Checkpoints
You can revert a virtual machine to a previous state by restoring it to
the required checkpoint. When you restore a virtual machine to a
checkpoint, VMM stops the virtual machine and the files on the virtual
machine are restored to their previous state.
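Conceptually, restoring a checkpoint discards the virtual machine's current state and replaces it with the saved point-in-time state. The following is a minimal illustrative sketch of that model; the class and method names are hypothetical and are not the VMM API.

import copy
from datetime import datetime

# Hypothetical model of checkpoint (snapshot) create-and-restore semantics.
class VirtualMachine:
    def __init__(self, name, state):
        self.name, self.state = name, state
        self.checkpoints = {}                      # label -> saved point-in-time state

    def new_checkpoint(self):
        # By default the label is the VM name plus the creation time.
        label = f"{self.name} - {datetime.now():%m/%d/%Y %I:%M:%S %p}"
        self.checkpoints[label] = copy.deepcopy(self.state)
        return label

    def restore(self, label):
        # VMM stops the VM; its files revert to the checkpointed state.
        self.state = copy.deepcopy(self.checkpoints[label])

vm = VirtualMachine("GR", {"objects": ["UDO1"]})
cp = vm.new_checkpoint()
vm.state["objects"].append("UDO2")                 # changes made after the checkpoint
vm.restore(cp)                                     # the restore discards UDO2
print(vm.state["objects"])                         # ['UDO1']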
Note: By default, the port number is 8100. However, you can modify
it, if required.
A green dot appears below the checkpoint that you restored indicating
that it is now active. Click OK to exit the window.
Note: By default, the port number is 8100. However, you can modify
it, if required.
A green dot appears below the checkpoint you restored indicating that
it is now active. Click OK to exit the window.
Note: By default, the port number is 8100. However, you can modify
it, if required.
4 Create UDO1.
In Application Server under Platform, Engine, and Area, create
UDO1.
5 Select the VM.
In the Virtual Machine Manager window, right-click the VM again.
The VM menu appears.
6 Make a new checkpoint.
a Click New checkpoint. The New checkpoint window appears.
b Modify the name of the checkpoint and type a description for it,
and then click Create. The checkpoint is created and the
Virtual Machine Manager window appears.
Note: By default, the Name box displays the name of the VM and the
time when the checkpoint is created.
8 Create UDO2.
In Application Server under Platform, Engine, and Area, create
UDO2.
A green dot appears below the checkpoint that you restored indicating
that it is now active. Click OK to exit the window.
Recommendations
Observation Recommendations
Historian
Application Server
Observation Recommendations
InTouch
Historian Client
Glossary
Application Engine (AppEngine) A scan-based engine that hosts and executes the run-time logic
contained within Automation Objects.
application object An Automation Object that represents some element of your
production environment. This can include things like:
• An automation process component. For example, a thermocouple,
pump, motor, valve, reactor, or tank
• An associated application component. For example, function block,
PID loop, sequential function chart, ladder logic program, batch
phase, or SPC data sheet
Application Server It is the supervisory control platform. Application Server uses existing
Wonderware products, such as InTouch for visualization, Wonderware
Historian for data storage, and the device integration product line like
a Data Access Server (DAServer) for device communications.
An Application Server can be distributed across multiple computers as
part of a single Galaxy namespace.
child partition Child partitions are made by the hypervisor in response to a request
from the parent partition. There are a couple of key differences
between a child partition and a parent/root partition. Child partitions
are unable to create new partitions. Child partitions do not have direct
access to devices (any attempt to interact with hardware directly is
routed to the parent partition). Child partitions do not have direct
access to memory. When a child partition tries to access memory the
hypervisor / virtualization stack re-maps the request to different
memory locations.
differencing disk A virtual hard disk that is associated with another virtual hard disk in
a parent-child relationship. The differencing disk is the child and the
associated virtual hard disk is the parent.
differencing virtual hard disk (diffdisk) A virtual hard disk that stores the changes or "differences" to an
associated parent virtual hard disk for the purpose of keeping the
parent intact. The differencing disk is a separate .vhd file (that may be
stored in a separate location) that is associated with the .vhd file of the
parent disk. These disks are often referred to as "children" or "child"
disks to distinguish them from the "parent" disk. There can be only
one parent disk in a chain of differencing disks. There can be one or
more child disks in a differencing disk chain of disks that are "related"
to each other. Changes continue to accumulate in the differencing disk
until it is merged to the parent disk. See also virtual hard disk. A
common use for differencing disks is to manage storage space on a
virtualization server. For example, you can create a base parent disk,
such as a Windows 2008 R2 Standard base image, and use it as the
foundation for all other guest virtual machines and disks that will be
based on Windows Server 2008 R2.
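As a short illustration of the parent/child relationship described above: reads fall through to the parent for unchanged blocks, while writes accumulate in the child until it is merged back. This is a conceptual sketch with hypothetical names, not a description of the VHD file format.

# Hypothetical model of a differencing (child) disk over a parent disk.
class Disk:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.blocks = name, parent, {}

    def write(self, block, data):
        self.blocks[block] = data                 # changes accumulate in this disk

    def read(self, block):
        if block in self.blocks:
            return self.blocks[block]
        return self.parent.read(block) if self.parent else None

    def merge_to_parent(self):
        self.parent.blocks.update(self.blocks)    # fold changes back into the parent
        self.blocks.clear()

base = Disk("Win2008R2-base.vhd")                 # parent: shared base image
base.write(0, "OS files")
child = Disk("guest1-diff.vhd", parent=base)      # differencing disk for one guest
child.write(1, "guest-specific data")
print(child.read(0), "/", child.read(1))          # "OS files / guest-specific data"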
dynamically expanding virtual hard disk (dynamic VHD, DVHD) A virtual hard disk that grows in size each time it is modified. This
type of virtual hard disk starts as a 3 KB .vhd file and can grow as
large as the maximum size specified when the file was created. The
only way to reduce the file size is to zero out the deleted data and then
compact the virtual hard disk. See also virtual hard disk, VHD.
external virtual network A virtual network that is configured to use a physical network adapter.
These networks are used to connect virtual machines to external
networks. See also internal virtual network, private virtual network.
failover In server clusters, failover is the process of taking resource groups
offline on one node and bringing them online on another node.
fragmentation The scattering of parts of the same disk file over different areas of the
disk.
guest operating system This is the operating system/runtime environment that is present
inside a partition. Historically, Virtual Server / Virtual PC
distinguished between a host operating system and a guest operating
system, where the host ran on the physical hardware and the guest ran
on the host. In Hyper-V, all operating systems on the physical computer
run on top of the hypervisor, so the correct equivalent terms are parent
guest operating system and child guest operating system. Many find
these terms confusing and instead use physical operating system and
guest operating system to refer to parent and child guest operating
systems, respectively.
guests and hosts A guest virtual machine and host server are the two main building
blocks of virtualization. The guest virtual machine is a file that
contains a virtualized operating system and application, and the host
server is the hardware on which it runs. The other important
component is the hypervisor—the software that creates the guest
virtual machine and lets it interact with the host server. The
hypervisor also makes the host server run multiple guest virtual
machines.
historical storage system (Historian) The time series data storage system that compresses and stores high
volumes of time series data for later retrieval. The standard Historian
is the Wonderware Historian.
management operating system The operating system that was originally installed on the physical
machine when the Hyper-V role was enabled. After installing the
Hyper-V role, this operating system is moved into the parent partition.
The management operating system automatically launches when you
reboot the physical machine. The management operating system
actually runs in a special kind of virtual machine that can create and
manage the virtual machines that are used to run workloads and/or
different operating systems. These virtual machines are sometimes
also called child partitions. The management operating system
provides management access to the virtual machines and an execution
environment for the Hyper-V services. The management operating
system also provides the virtual machines with access to the hardware
resources it owns.
memory overcommit A hypervisor can let a guest VM use more memory space than that
available in the host server. This feature is called memory
overcommit. Memory overcommit is possible because most VMs use
only a little bit of their allocated physical memory. That frees up
memory for the few VMs that need more. Hypervisors with memory
overcommit features can identify unused memory and reallocate it to
more memory-intensive VMs as needed.
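A small illustrative sketch of the reallocation idea behind memory overcommit, using hypothetical numbers; real hypervisors rely on techniques such as ballooning and page sharing to reclaim unused memory.

# Hypothetical illustration of memory overcommit: allocated memory exceeds
# physical memory, and idle pages can be reclaimed for busier VMs.
host_physical_mb = 16_384
vms = {"VM-A": {"allocated": 8_192, "in_use": 2_048},
       "VM-B": {"allocated": 8_192, "in_use": 3_072},
       "VM-C": {"allocated": 8_192, "in_use": 7_168}}   # memory-intensive VM

total_allocated = sum(v["allocated"] for v in vms.values())
total_in_use = sum(v["in_use"] for v in vms.values())

print(f"allocated {total_allocated} MB > physical {host_physical_mb} MB (overcommitted)")
print(f"actually in use: {total_in_use} MB, so unused pages "
      f"({total_allocated - total_in_use} MB) can be reclaimed for VM-C as needed")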
Network Load Balancing (NLB) A Windows network component that uses a distributed algorithm to
load-balance IP traffic across a number of hosts, helping to enhance
the scalability and availability of mission-critical, IP-based services.
network virtualization Network virtualization lets you combine multiple networks into one,
divide one network into many, and even create software-only networks
between VMs. The basis of network virtualization is virtual network
software, to which there are two approaches: internal and external.
Internal network virtualization uses virtual network software to
emulate network connectivity among VMs inside a host server.
External network virtualization uses virtual network software to
consolidate multiple physical networks or create several virtual
networks out of one physical network.
NTFS An advanced file system that provides performance, security,
reliability, and advanced features that are not found in any version of
the file allocation table (FAT).
parent partition The parent partition can call the hypervisor and request that new
partitions be created. There can only be one parent partition. In the
first release of Hyper-V, the parent and root partitions are one and the
same.
physical computer The computer, or more specifically, the hardware that is running the
Hyper-V role.
physical processor It is the squarish chip that you put in your computer to make it run.
This is sometimes also referred to as a "package" or a "socket".
root partition This is the first partition on the computer. This is the partition that is
responsible for starting the hypervisor. It is also the only partition
that has direct access to memory and devices.
saved state A manner of storing a virtual machine so that it can be quickly
resumed (similar to a hibernated laptop). When you place a running
virtual machine in a saved state, Virtual Server and Hyper-V stop the
virtual machine, write the data that exists in memory to temporary
files, and stop the consumption of system resources. Restoring a
virtual machine from a saved state returns it to the same condition it
was in when its state was saved.
small computer system interface (SCSI) A standard high-speed parallel interface used for connecting
microcomputers to peripheral devices, such as hard disks and printers,
and to other computers and local area networks (LANs).
.vfd or virtual floppy disk The file format for a virtual floppy disk. See also virtual floppy disk.
.vhd or virtual hard disk The file format for a virtual hard disk, the storage medium for a
virtual machine. It can reside on any storage topology that the
management operating system can access, including external devices,
storage area networks, and network-attached storage.
virtual hardware The computing resources that the host server assigns to a guest VM
make up the virtual hardware platform. The hypervisor controls the
virtual hardware platform and allows the VM to run on any host
server, regardless of the physical hardware. The virtual hardware
platform includes memory, processor cores, optical drives, network
adapters, I/O ports, a disk controller and virtual hard disks.
Virtualization lets a user adjust the levels of these resources on each
VM as needed.
virtual machine A virtual machine (VM) is a file that includes an application and an
underlying operating system; combined with a physical host server and
a hypervisor, it makes server virtualization possible. A virtual machine
is a super-set of a child partition: a child partition combined with
virtualization stack components that provide functionality, such as
access to emulated devices, and features like being able to save the
state of a virtual machine. As a virtual machine is essentially a
specialized partition, the terms "partition" and "virtual machine" are
often used interchangeably. But, while a virtual machine will always
have a partition associated with it, a partition may not always be a
virtual machine.
virtual machine bus A communications line used in Hyper-V by virtual machines and
certain types of virtual devices. The virtual devices that use virtual
machine bus have been optimized for use in virtual machines.
virtual machine configuration The configuration of the resources assigned to a virtual machine.
Examples include devices such as disks and network adapters, as well
as memory and processors.
Virtual machine connection A Hyper-V management tool that allows a running virtual machine to
be managed through an interactive session.
virtual machine management service The SCVMM service that provides management access to virtual
machines.
virtual network A virtual version of a physical network switch. A virtual network can
be configured to provide access to local or external network resources
for one or more virtual machines.
virtual network manager The Hyper-V component used to create and manage virtual networks.
virtualization server A physical computer with the Hyper-V role installed. This server
contains the management operating system and it provides the
environment for creating and running virtual machines. Sometimes
referred to as a server running Hyper-V.
virtualization stack The virtualization stack is everything else that makes up Hyper-V.
This includes the user interface, management services, virtual machine
processes, and emulated devices.
virtual processor A virtual processor is a single logical processor that is exposed to a
partition by the hypervisor. Virtual processors can be mapped to any of
the available logical processors in the physical computer and are
scheduled by the hypervisor to allow you to have more virtual
processors than you have logical processors.
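A small illustrative sketch of why a computer can expose more virtual processors than logical processors: the hypervisor time-slices them. The round-robin model and names below are hypothetical; real schedulers are far more sophisticated.

from itertools import cycle

# Hypothetical round-robin model: 8 virtual processors scheduled onto
# 4 logical processors in successive time slices.
logical_processors = [f"LP{i}" for i in range(4)]
virtual_processors = [f"VP{i}" for i in range(8)]

lp_cycle = cycle(logical_processors)
schedule = [(vp, next(lp_cycle)) for vp in virtual_processors]
for vp, lp in schedule:
    print(f"{vp} runs on {lp} this time slice")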
virtual switch A virtual switch is the key to network virtualization. It connects
physical switches to VMs through physical network interface cards
and ports. A virtual switch is similar to a virtual bridge, which many
virtualization platforms use, but it is more advanced. Virtual LANs,
virtualization WMI provider The WMI provider for virtualization that can be used with the
hypervisor API to enable developers and scripters to build custom
tools, utilities, and enhancements for the virtualization platform.
VMDK The Virtual Machine Disk (VMDK) file format is used to identify
VMware virtual machines. (In virtualization, the hypervisor creates a
VM file that consists of an operating system instance, an application
and other associated components.) Other platforms that support the
VMDK file format include Sun Microsystems xVM, Oracle VirtualBox,
and QEMU. It competes with Microsoft's Virtual Hard Disk format,
which is used in Virtual Server and Hyper-V.
Index
A
abstraction layer 20
AppEngine
  live migration 102
  performance 128
  quick migration 110
  redundant 535
AppEngine1
  live migration 195
  quick migration 208
AppEngine2
  live migration 198
  quick migration 211
application
  access 554
  allow 554
  node 136, 252
  runtime 136, 252
  server 136, 251, 252
  server, application runtime 42
  users 554
Application Server
  configuration 181
  runtime 136
  virtual machine 135, 136
Application Server Runtime node
  virtual machine 136
Application Server Runtime node 2
  virtual machine 136
applications
  configure 552
  remote 552
  system platform 541

B
backup
  implementing strategies 687
  preparing virtual image 676

C
checkpoint
  taking offline vm 688, 691
  using vmm 688
checkpoints
  restore 702
  restore system platform products (offline mode) 708
  restore system platform products (online mode) 708
  restoring 695
cluster
  communication 56, 151
  configuration 46
  configure 43, 59, 138, 154, 253
  configuring 43, 59
  create 51, 146, 463
  creating 51, 146
  failover 43, 46, 138, 139, 253, 254, 604
  installing 43, 139

GR 99, 192
Historian 99
HistorianClient 104
InTouch 104, 192
WIS 104

M
majority quorum
  file share 59
medium scale virtualization
  configuring system platform products 178
  setting up 133
  working 132
multi-monitor
  display (single) 568
  system platform nodes 566
multi-monitors
  single display 568

N
network
  cluster communication 56, 151
  communication 525
  configure 516
  create 516, 519
  disabling 151
  disconnect 124, 230
  failover 96, 124
  failure 124
  internal adapter 525
  leveraging 573
  leveraging load balancing 573
  plant 56, 151, 525
  private 96
  public 230
  remote desktop broker 573
  requirements 137, 252
  setting up 572
  virtual adapter 516, 522
  virtual switches 516, 519
  virtualization server 230
  working 569
network load balancing
  setting up 572
  working 569
network location
  creating virtual image 611
  with ISO file 611

O
offline vm
  restoring checkpoint 695

P
physical machine
  create 634, 644
  tips 656
product
  dependencies 702
  restore checkpoints 702

Q
quick migration
  nodes 114
quorum
  majority 59

R
recommendation
  preparing virtual image from ghost backup 685
Recovery Point Objective
  expected 86
  HA small configuration 86
  observations 86, 181
recovery point objective
  expected 181
Recovery Time Objective
  expected 86, 181
  HA small configuration 86
  medium configuration 181
  observations 86, 181
redundant application server
  communication 534
remote applications
  access 556
  accessing 556
  client node 556
  configure 552
  remote desktop session host server node 552
  system platform applications 541
remote desktop
  configure 543, 596
  connect 600
  connection broker 575, 596
  disconnect session 600