Double-Take Availability for Linux User's Guide
Version 7.1.2
Notices
Double-Take Availability for Linux User's Guide Version 7.1.2, Wednesday, October 28, 2015
l Product Updates - Check your service agreement to determine which updates and new releases
you may be eligible for. Product updates can be obtained from the support web site at
http://www.VisionSolutions.com/SupportCentral.
l Sales - If you need maintenance renewal, an upgrade license key, or other sales assistance, contact
your reseller/distributor or a Vision Solutions sales representative. Contact information is available on
the Vision Solutions Worldwide Locations and Contacts web page at
http://www.VisionSolutions.com/Company/Vision-HA-Locations.aspx.
l Technical Support - If you need technical assistance, you can contact CustomerCare. All basic
configurations outlined in the online documentation will be supported through CustomerCare. Your
technical support center is dependent on the reseller or distributor you purchased your product from
and is identified on your service agreement. If you do not have access to this agreement, contact
CustomerCare and they will direct you to the correct service provider. To contact CustomerCare, you
will need your serial number and license key. Contact information is available on the Vision Solutions
CustomerCare web page at http://www.VisionSolutions.com/Support/Support-Overview.aspx .
l Professional Services - Assistance and support for advanced configurations may be referred to a
Pre-Sales Systems Engineer or to Professional Services. For more information, see the Windows
and Linux tab on the Vision Solutions Consulting Services web page at
http://www.visionsolutions.com/services-support/services/overview .
l Training - Classroom and computer-based training are available. For more information, see the
Double-Take Product Training web page at http://www.VisionSolutions.com/Services/DT-
Education.aspx.
l Linux man pages - Man pages are installed and available on Double-Take Linux servers. These
documents are bound by the same Vision Solutions license agreement as the software installation.
This documentation is subject to the following: (1) Change without notice; (2) Furnished pursuant to a
license agreement; (3) Proprietary to the respective owner; (4) Not to be copied or reproduced unless
authorized pursuant to the license agreement; (5) Provided without any expressed or implied warranties; (6)
Does not entitle Licensee, End User or any other party to the source code or source code documentation of
anything within the documentation or otherwise provided that is proprietary to Vision Solutions, Inc.; and (7)
All Open Source and Third-Party Components (OSTPC) are provided AS IS pursuant to that OSTPC's
license agreement and disclaimers of warranties and liability.
Vision Solutions, Inc. and/or its affiliates and subsidiaries in the United States and/or other countries
own/hold rights to certain trademarks, registered trademarks, and logos. Hyper-V and Windows are
registered trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a
registered trademark of Linus Torvalds. vSphere is a registered trademark of VMware. All other trademarks
are the property of their respective companies. For a complete list of trademarks registered to other
companies, please visit that company's website.
© 2015 Vision Solutions, Inc. All rights reserved.
Contents
Chapter 1 Double-Take Availability overview 6
Core operations 7
Supported configurations 10
Chapter 2 Double-Take clients 17
Replication Console 18
Using Replication Console workspaces 19
Clearing stored security credentials 20
Failover Control Center 21
Setting the frequency of Failover Control Center console refreshes 22
Double-Take Console 23
Double-Take Console requirements 25
Console options 26
Managing servers 29
Adding servers 34
Providing server credentials 37
Viewing server details 38
Editing server properties 40
Server licensing 41
Viewing server logs 43
Managing VMware servers 45
Chapter 3 Files and folders protection 46
Files and folders requirements 47
DTSetup 50
Running DTSetup 51
Setup tasks 52
Activating your server 53
Modifying security groups 54
Configuring block device replication 55
Configuring server settings 57
Configuring driver performance settings 58
Starting and stopping the daemon 59
Starting DTCL 60
Viewing documentation and troubleshooting tools 61
DTSetup menus 62
Data protection 63
Establishing a data connection using the automated Connection Wizard 64
Creating a replication set 66
Establishing a connection manually using the Connection Manager 69
Establishing a connection across a NAT or firewall 73
Simulating a connection 75
Protection monitoring 76
Monitoring a data workload 77
Log files 83
Viewing the log files through a text editor 84
Viewing the Double-Take log file through the Replication Console 85
Configuring the properties of the Double-Take log file 87
Double-Take log messages 88
Monitoring the Linux system log 94
E-mailing system messages 105
Statistics 108
Configuring the properties of the statistics file 109
Viewing the statistics file 110
Statistics 112
SNMP 118
Configuring SNMP on your server 119
SNMP traps 121
SNMP statistics 124
Connections 127
Data queues 128
Queuing data 130
Auto-disconnect and auto-reconnect 133
Reconnecting automatically 135
Pausing and resuming target processing 136
Disconnecting a connection 137
Mirroring 138
Stopping, starting, pausing, or resuming mirroring 139
Mirroring automatically 141
Removing orphan files 143
Replication 145
Replication capabilities 146
Replication sets 148
Creating a replication set 150
Creating or modifying replication rules manually 153
Selecting a block device for replication 155
Modifying a replication set 156
Renaming and copying a replication set 157
Calculating replication set size 158
Exporting and importing a replication set 160
Deleting a replication set 161
Starting replication 162
Inserting tasks during replication 163
Verification 164
Verifying manually 165
Verifying on a schedule 166
Configuring the verification log 168
Data transmission 170
Stopping, starting, pausing, and resuming transmission 171
Scheduling data transmission 171
Limiting transmission bandwidth 176
Compressing data for transmission 178
Failover 180
Configuring failover monitoring 181
WAN considerations 184
Protecting NFS exports 186
Protecting Samba shares 187
Editing failover monitoring configuration 188
Monitoring failover monitoring 189
Failing over 192
Removing failover monitoring configuration 192
Failback and restoration 193
Restoring then failing back 194
Failing back then restoring 199
Server settings 202
Identifying a server 203
Licensing a server 205
Configuring server startup options 208
Configuring network communication properties for a server 210
Queuing data 212
Configuring source data processing options 215
Configuring target data processing options 217
Specifying the Double-Take database storage files 218
Specifying file names for logging and statistics 219
E-mailing system messages 221
Security 224
Logging on and off 225
Chapter 4 Full server protection 227
Full server requirements 228
Creating a full server job 233
Managing and controlling full server jobs 245
Viewing full server job details 252
Validating a full server job 256
Editing a full server job 257
Viewing a full server job log 259
Failing over full server jobs 261
Reversing full server jobs 263
Chapter 5 Full server to ESX appliance protection 264
Full server to ESX appliance requirements 265
Creating a full server to ESX appliance job 270
Managing and controlling full server to ESX appliance jobs 287
Viewing full server to ESX appliance job details 294
Validating a full server to ESX appliance job 297
Editing a full server to ESX appliance job 298
Viewing a full server to ESX appliance job log 300
Failing over full server to ESX appliance jobs 302
Chapter 1 Double-Take Availability overview
Double-Take Availability ensures the availability of critical workloads. Using real-time replication and
failover, you can protect data or entire servers, running on physical or virtual servers.
You identify what you want to protect on your production server, known as the source, and replicate that
to a backup server, known as the target. The target server, on a local network or at a remote site, stores
a replica copy of the data from the source. Double-Take monitors any changes to the source and sends
the changes to the replica copy stored on the target server. By replicating only the file changes rather
than copying an entire file, Double-Take allows you to more efficiently use resources.
Mirroring
Mirroring is the process of transmitting user-specified data from the source to the target so that an
identical copy of data exists on the target. When Double-Take initially performs mirroring, it copies all of
the selected data, including file attributes and permissions. Mirroring creates a foundation upon which
Double-Take can efficiently update the target server by replicating only file changes.
If subsequent mirroring operations are necessary, Double-Take can mirror specific files or blocks of
changed data within files. By mirroring only files that have changed, network administrators can expedite
the mirroring of data on the source and target servers. Mirroring has a defined end point when all of the
selected files from the source have been transmitted to the target. When a mirror is complete, the target
contains a copy of the source files at that point in time.
1. User and application requests are sent to the source name or IP address.
2. Data on the source is mirrored and replicated to the target.
3. The target monitors the source for failure.
4. In the event the source fails, the target stands in for the source. User and application requests are
still sent to the source name or IP address, which are now running on the target.
Not all types of jobs support all of these configurations. See the requirements of each job type to
determine which configurations are supported.
One to one active/standby
Description
One target server, having no production activity, is dedicated to support one source
server. The source is the only server actively replicating data.
Applications
l This configuration is appropriate for offsite disaster recovery, failover, and critical
data backup. This is especially appropriate for critical application servers.
l This is the easiest configuration to implement, support, and maintain.
Considerations
l This configuration requires the highest hardware cost because a target server is
required for every source server.
l You must pause the target when backing up database files on the target.
One to one active/active
Description
Each server acts as both a source and a target, actively replicating data to each other.
Applications
This configuration is appropriate for failover and critical data backup. This configuration
is more cost-effective than the Active/Standby configuration because there is no need
to buy a dedicated target server for each source. In this case, both servers can do full-
time production work.
Considerations
l Coordination of the configuration of Double-Take and other applications can be
more complex than the one to one active/standby configuration.
l During replication, each server must continue to process its normal workload.
l Administrators must avoid selecting a target destination path that is included in the
source's protected data set. Any overlap will cause an infinite loop.
l To support the production activities of both servers during failover without reducing
performance, each server should have sufficient disk space and processing
resources.
l Failover and failback scripts must be implemented to avoid conflict with the existing
production applications.
l You must pause the target when backing up database files on the target.
Many to one
Description
Many source servers are protected by one target server.
Applications
This configuration is appropriate for offsite disaster recovery. This is also an excellent
choice for providing centralized tape backup because it spreads the cost of one target
server among many source servers.
Considerations
l The target server must be carefully managed. It must have enough disk space and
RAM to support replication from all of the source systems. The target must be able
to accommodate traffic from all of the servers simultaneously.
l If using failover, scripts must be coordinated to ensure that, in the event that the
target server stands in for a failed server, applications will not conflict.
l You must pause the target when backing up database files on the target.
One to many
Description
One source server sends data to multiple target servers. The target servers may or
may not be accessible by one another.
Applications
This configuration provides offsite disaster recovery, redundant backups, and data
distribution. For example, this configuration can replicate all data to a local target server
and separately replicate a subset of the mission-critical data to an offsite disaster
recovery server.
Considerations
l Updates are transmitted multiple times across the network. If one of the target
servers is on a WAN, the source server is burdened with WAN communications.
l You must pause the target when backing up database files on the target.
l If you failover to one of the targets, the other targets stop receiving updates.
Chained
Description
The source server sends replicated data to a target server, which acts as a source
server and sends data to a final target server, which is often offsite.
Applications
This is a convenient approach for integrating local high availability with offsite disaster
recovery. This configuration moves the processing burden of WAN communications
from the source server to the target/source server. After failover in a one to one, many
to one, or one to many configuration, the data on the target is no longer protected. This
configuration allows failover from the first source to the middle machine, with the third
machine still protecting the data.
Considerations
l The target/source server could become a single point of failure for offsite data
protection.
l You must pause the target when backing up database files on the target.
Single server
Description
Source and target components are loaded on the same server allowing data to be
replicated from one location to another on the same volume or to a separate volume on
the same server. These could be locally attached SCSI drives or Fibre Channel based
SAN devices.
Applications
This configuration is useful for upgrading storage hardware while leaving an application
online. Once the data is mirrored, you can swap the drive in the disk manager. If the
source and target copies of the data are located on different drives, this configuration
supports high availability of the data in the event that the source hard drive fails.
Considerations
l This configuration does not provide high availability for the entire server.
l This configuration must be configured carefully so that an infinite loop is not created.
l Full server and full server to ESX appliance jobs - Full server and full server to ESX
appliance jobs use the Double-Take Console to control and manage the jobs and failover. This
client installation is detailed in the Double-Take Installation, Licensing, and Activation document.
After the installation is complete, the console can be started from the Windows Start menu. Linux
full server and full server to ESX appliance jobs can also use the Double-Take PowerShell
scripting to control and manage these job types. For more information, see the Double-Take
PowerShell Scripting Guide.
l Double-Take Console on page 23
l Headlines - The top section gives a quick overview of any jobs that require attention as well as
providing quick access buttons.
l These jobs require attention - Any jobs that require attention (those in an error state)
are listed. You will see the source and target server names listed, as well as a short
description of the issue that requires your attention. If the list is blank, there are no jobs that
require immediate attention.
l Tools - Select this drop-down list to launch other Vision Solutions consoles.
edit, add, remove, or manage the servers in your console. See Managing servers on page
29.
l Jobs Summary - The bottom section summarizes the jobs in your console.
l Total number of jobs - This field displays the number of jobs running on the servers in
your console.
l View jobs with errors - Select this link to go to the Manage Jobs page, where the
At the bottom of the Double-Take Console, you will see a status bar. At the right side, you will find links
for Jobs with warnings and Jobs with errors. This lets you see quickly, no matter which page of the
console you are on, if you have any jobs that need your attention. Select this link to go to the Manage
Jobs page, where the appropriate Filter: Jobs with warnings or Filter: Jobs with errors will
automatically be applied.
The Double-Take installation prohibits the console from being installed on Server Core.
Because Windows 2012 allows you to switch back and forth between Server Core and a full
installation, you may have the console files available on Server Core, if you installed Double-
Take while running in full operating system mode. In any case, you cannot run the Double-Take
Console on Server Core.
credentials, after the specified retry interval, if the server login credentials are not accepted.
Keep in mind the following caveats when using this option.
l This is only for server credentials, not job credentials.
l A set of credentials provided for or used by multiple servers will not be retried for the
specified retry interval on any server if it fails on any of the servers using it.
l Verify your environment's security policy when using this option. Check your policies
for failed login lock outs and resets. For example, if your policy is to reset the failed
login attempt count after 30 minutes, set this auto-retry option to the same value as, or
slightly larger than, the 30-minute security policy to decrease the chance of a
lockout.
l Restarting the Double-Take Console will automatically initiate an immediate login.
l Entering new credentials will initiate an immediate login using the new credentials.
l Retry on this interval - If you have enabled the automatic retry, specify the length of time,
will need to use the legacy protocol port. This applies to Double-Take versions 5.1 or
earlier.
l Diagnostics - This section assists with console troubleshooting.
l Export Diagnostic Data - This button creates a raw data file that can be used for
debugging errors in the Double-Take Console. Use this button as directed by technical
support.
l View Log File - This button opens the Double-Take Console log file. Use this button as
directed by technical support. You can also select View, View Console Log File to open
the Double-Take Console log file.
l View Data File - This button opens the Double-Take Console data file. Use this button as
directed by technical support. You can also select View, View Console Data File to open
the Double-Take Console data file.
l Automatic Updates - This section is for automatically updating your console.
will close and your web browser will open to the Vision Solutions web site where you
can download and install the update.
l No update available - If you are using the most recent console software, that will
there is an error, the console will report that information. The console log contains a
more detailed explanation of the error. Click Check using Browser if you want to
open your browser to check for console software updates. You will need to use your
browser if your Internet access is through a proxy server.
l License Inventory - This section controls whether the console contains a license inventory. This
feature may not appear in your console if your service provider has restricted access to it.
l Enable license inventory - This option allows you to use this console to manage the
Double-Take licenses assigned to your organization. When this option is enabled, the
Manage License Inventory page is also enabled.
l Default Installation Options - All of the fields under the Default Installation Options section
are used by the push installation on the Install page. The values specified here will be the default
options used for the push installation.
l Activate online after install completes - Specify if you want to activate your Double-
Take licenses at the end of the installation. The activation requires Internet access from the
console machine or the machine you are installing to. Activation will be attempted from the
console machine first and if that fails, it will be attempted from the machine you are installing
to. If you choose not to have the installation activate your licenses, you will have to activate
them through the console license inventory or the server's properties page.
l Location of install folders - Specify the parent directory location where the installation
files are located. The parent directory can be local on your console machine or a UNC path.
l Windows - Specify the parent directory where the Windows installation file is
located. The default location is where the Double-Take Console is installed, which is
\Program Files\Vision Solutions\Double-Take. The console will automatically use the
\i386 subdirectory for 32-bit installations and the \x64 subdirectory for 64-bit
installations. These subdirectories are automatically populated with the Windows
installation files when you installed the console. If you want to use a different location,
you must copy the \i386 or \x64 folder and its installation file to the different parent
directory that you specify.
Take Console installation location, you must make sure they are in a \Linux
subdirectory under the parent directory you specified for Location of install
folders. Copy the Linux .deb or .rpm files from your download to the \Linux
subdirectory. Make sure you only have a single version of the Linux installation
files in that location. The push installation cannot determine which version to
install if there are multiple versions in the \Linux subdirectory.
l If you have already deployed your Linux virtual recovery appliance, specify the
UNC path to the installers share on the appliance. For example, if your
appliance is called DTAppliance, use the path \\DTAppliance\installers for the
Location of install folders. The console will automatically use the
installation files in the \Linux subdirectory of this share location.
l Default Windows Installation Options - All of the fields under the Default Installation
Options section are used by the push installation on the Install page. The values specified here
will be the default options used for the push installation.
l Temporary folder for installation package - Specify a temporary location on the server
where you are installing Double-Take where the installation files will be copied and run.
l Installation folder - Specify the location where you want to install Double-Take on each
server. This field is not used if you are upgrading an existing version of Double-Take. In that
case, the existing installation folder will be used.
l Queue folder - Specify the location where you want to store the Double-Take disk queue
on each server.
l Amount of system memory to use - Specify the maximum amount of memory, in MB,
Queue folder that must be available at all times. This amount should be less than the
amount of physical disk space minus the disk size specified for Limit disk space for
queue.
l Do not use disk queue - This option will disable disk queuing. When system memory has
specified Queue folder for disk queuing, which will allow the queue usage to automatically
expand whenever the available disk space expands. When the available disk space has
been used, Double-Take will automatically begin the auto-disconnect process.
l Limit disk space for queue - This option will allow you to specify a fixed amount of disk
space, in MB, in the specified Queue folder that can be used for Double-Take disk
queuing. When the disk space limit is reached, Double-Take will automatically begin the
auto-disconnect process.
l Default Linux Installation Options - All of the fields under the Default Installation Options
section are used by the push installation on the Install page. The values specified here will be the
default options used for the push installation.
l Temporary folder for installation package - Specify a temporary location on the server
where you are installing Double-Take where the installation files will be copied and run.
If you have uninstalled and reinstalled Double-Take on a server, you may see the server twice
on the Manage Servers page because the reinstall assigns a new unique identifier to the
server. One of the servers (the original version) will show with the red X icon. You can safely
remove that server from the console.
Column 1 (Blank)
The first blank column indicates the machine type.
Offline server which means the console cannot communicate with this machine.
Server error which means the console can communicate with the machine, but it
cannot communicate with Double-Take on it.
Column 2 (Blank)
The second blank column indicates the security level.
Add Servers
Adds a new server. This button leaves the Manage Servers page and opens the Add
Servers page. See Adding servers on page 34.
Remove Server
Removes the server from the console.
Provide Credentials
Changes the login credentials that the Double-Take Console uses to authenticate to a
server. This button opens the Provide Credentials dialog box where you can specify
the new account information. See Providing server credentials on page 37. You will
remain on the Manage Servers page after updating the server credentials.
Uninstall
Uninstalls Double-Take on the selected server.
Copy
Copies the information for the selected servers. You can then paste the server
information as needed. Each server is pasted on a new line, with the server information
being comma-separated.
Paste
Pastes a new-line separated list of servers into the console. Your copied list of servers
must be entered on individual lines with only server names or IP addresses on each
line.
Activate Online
Activates licenses and applies the activation keys to servers in one step. You must have
Internet access for this process. You will not be able to activate a license that has
already been activated.
Refresh
Refreshes the status of the selected servers.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
console. See the following NAT configuration section if you have a NAT environment.
l User name - For a server, specify a user that is a member of the dtadmin or dtmon
4. After you have specified the server or appliance information, click Add.
5. Repeat steps 3 and 4 for any other servers or appliances you want to add.
6. If you need to remove servers or appliances from the list of Servers to be added, highlight a
server and click Remove. You can also remove all of them with the Remove All button.
7. When your list of Servers to be added is complete, click OK.
In this table, public addresses are those addresses that are publicly available when a
server is behind a NAT router. Private addresses are those addresses that are privately
available when a server is behind a NAT router. An address that is not labeled as public or
private is for servers that are not behind a NAT router. This is generally a public address
but is not named as such in this table to try to more clearly identify when a public NAT
address needs to be used.
Importing and exporting servers from a server and group configuration file
You can share the console server and group configuration between machines that have the Double-
Take Console installed. The console server configuration includes the server group configuration, server
name, server communications ports, and other internal processing information.
To export a server and group configuration file, select File, Export Servers. Specify a file name and
click Save. After the configuration file is exported, you can import it to another console.
When you are importing a console server and group configuration file from another console, you will not
lose or overwrite any servers that already exist in the console. For example, if you have server alpha in
your console and you insert a server configuration file that contains servers alpha and beta, only the
server beta will be inserted. Existing group names will not be merged, so you may see duplicate server
groups that you will have to manually update as desired.
To import a server and group configuration file, select File, Import Servers. Locate the console
configuration file saved from the other machine and click Open.
Server name
The name or IP address of the server. If you have specified a reserved IP address, it
will be displayed in parentheses.
Operating system
The server's operating system version
Roles
The role of this server in your Double-Take environment. In some cases, a server can
have more than one role.
l Engine Role - Source or target server
l Image Repository Role - A target for a DR protection job or a source for a
DR recovery job
l Controller Role - Controller appliance for an agentless vSphere job
l Replication Appliance Role - Replication appliance for an agentless vSphere job
l Reporting Service - Double-Take Reporting Service server
Status
There are many different Status messages that keep you informed of the server
activity. Most of the status messages are informational and do not require any
administrator interaction. If you see error messages, check the rest of the server
details.
Activity
There are many different Activity messages that keep you informed of the server
activity. Most of the activity messages are informational and do not require any
administrator interaction. If you see error messages, check the rest of the server
details.
Connected via
The IP address and port the server is using for communications. You will also see the
Double-Take protocol being used to communicate with the server. The protocol will be
XML web services protocol (for servers running Double-Take version 5.2 or later) or
Legacy protocol (for servers running version 5.1 or earlier).
Version
The product version information
The fields and buttons in the Licensing section will vary depending on your Double-Take
Console configuration and the type of license keys you are using.
l Add license keys and activation keys - Your license key or activation key is a 24-character,
alpha-numeric key. You can change your license key without reinstalling, if your license changes.
To add a license key or activation key, type in the key or click Choose from inventory and select
a key from your console's license inventory. Then click Add.
The license inventory feature cannot be enabled if your service provider has restricted
access to it.
l Current license keys - The server's current license key information is displayed. To remove a
If you are replacing an existing license key that has already been activated, you must
remove both the old license key and the old activation key. Then you can add a new
license key and activate it successfully. If you are updating an existing license key, do not
remove the old license key or old activation key. Add the new license key on top of the
existing license key.
l Activation - If your license key needs to be activated, you will see an additional Activation
section at the bottom of the Licensing section. To activate your key, use one of the following
procedures.
l Activate online - If you have Internet access, you can activate your license and apply the
activated license to the server in one step by selecting Activate Online.
You will not be able to activate a license that has already been activated.
l Obtain activation key online, then activate - If you have Internet access, click the
hyperlink in the Activation section to take you to the web so that you can submit your
activation information. Complete and submit the activation form, and you will receive an e-
mail with the activation key. Activate your server by entering the activation key in the Add
license keys and activation keys field and clicking Add.
l Obtain activation key offline, then activate - If you do not have Internet access, go to
If your Double-Take Availability license key needs to be activated, you will have 14 days
to do so.
If you need to rename a server that already has a Double-Take license applied to it, you
should deactivate that license before changing the server name. That includes rebuilding
a server or changing the case (capitalization) of the server name (upper or lower case or
any combination of case). If you have already rebuilt the server or changed the server
name or case, you will have to perform a host-transfer to continue using that license.
The following table identifies the controls and the table columns in the Server logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Server logs window. The messages are still logged to their respective files
on the server.
Clear
This button clears the Server logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
reopen the Server logs window.
Filter
From the drop-down list, you can select to view all log messages or only those
messages from the Double-Take engine or the Double-Take Management Service.
Time
This column in the table indicates the date and time when the message was logged.
Description
This column in the table displays the actual message that was logged.
Service
This column in the table indicates if the message is from the Double-Take engine or the
Double-Take Management Service.
VMware Server
The name of the VMware server
Full Name
The full name of the VMware server
User Name
The user account being used to access the VMware server
Remove Server
Remove the VMware server from the console.
Provide Credentials
Edit credentials for the selected VMware server. When prompted, specify a user
account to access the VMware server.
l Notes - Oracle Enterprise Linux support is for the mainline kernel only, not the
Unbreakable kernel.
l Operating system - Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version - 6.4 through 6.6
l Notes - Oracle Enterprise Linux support includes the mainline kernel only for
version 6.3 and includes both the mainline kernel and the Unbreakable kernel for
versions 6.4 and 6.5.
l Operating system - SUSE Linux Enterprise
l Version - 10.3 and 10.4
XenPAE
l Kernel type for x86-64 (64-bit) architectures - Default, SMP, Xen
l Kernel version - 2.6.32-33
l Operating system - Ubuntu
l Version - 10.04.4
l Kernel version - 2.6.32-38
l Operating system - Ubuntu
l Version - 12.04.2
l Kernel version - 3.5.0-23
l Operating system - Ubuntu
l Version - 12.04.3
l Kernel version - 3.8.0-29
For all operating systems except Ubuntu, the kernel version must match the
expected kernel for the specified release version. For example, if /etc/redhat-
release declares the system to be a Red Hat 5.10 system, the kernel that is installed
must match that.
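For a quick check of whether the running kernel matches the declared release, you can compare the release file with the kernel version using standard commands. This is only an illustrative check; the release file name applies to Red Hat style systems as described above.
# Show the declared release version (Red Hat style systems)
cat /etc/redhat-release
# Show the kernel that is currently running
uname -r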
l System Memory - The minimum system memory on each server should be 1 GB. The
recommended amount for each server is 2 GB.
l Disk Usage - The amount of disk space required for the Double-Take program files is
approximately 85 MB. About 45 MB will be located on your /(root) partition, and the remainder will
be on your /usr partition. You will need to verify that you have additional disk space for Double-
Take queuing, logging, and so on. Additionally, on a target server, you need sufficient disk space
to store the replicated data from all connected sources, allowing additional space for growth.
Console requirements
The Replication Console can be run on any of the following operating systems.
l Windows 2008
l Windows 2003
l Windows 7
l Windows Vista
l Windows XP Service Pack 2 or later
Do not run DTSetup using the sudo command. Use a real root shell to launch DTSetup
instead, either by logging in as root on the console or by logging in as a non-privileged
user and running su - to start a root shell.
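As a minimal illustration of the recommended approach, you could switch from a non-privileged session to a full root shell and then launch the utility. The exact DTSetup command name and whether it is on root's path are assumptions here.
# Start a full root login shell (do not use sudo)
su -
# From the root shell, launch the setup utility
DTSetup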
2. The first time you run DTSetup after an installation, you will be prompted to review the Vision
Solutions license agreement. Review the agreement and accept the terms of the agreement by typing
yes. You cannot use Double-Take without agreeing to the licensing terms.
3. When the DTSetup menu appears, enter the number of the menu option you want to access.
Server activation can also be completed through the Replication Console. See Licensing a
server on page 205.
If a block device being protected with DTLOOP has a file system on it, do not mount that file
system from /etc/fstab. It should be mounted from an init script. DTMount must be started in the
boot sequence before the script that mounts the loop devices is executed in order to ensure that
the loop devices have the replicated block devices associated with them. The script should then
mount the loop device, not the native block device.
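A minimal init-script sketch along those lines is shown below. The loop device number, mount point, and underlying device are placeholders, not values from this guide.
#!/bin/sh
# Illustrative only: run after DTMount has associated the loop device
# with the replicated block device.
LOOPDEV=/dev/loop0      # placeholder loop device
MOUNTPOINT=/mnt/data    # placeholder mount point
# Mount through the loop device, not the native block device (for example /dev/sdb1)
mount "$LOOPDEV" "$MOUNTPOINT"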
When making replication configuration changes, stop any applications that may be running and
restart them after the replication changes have been made. Double-Take needs to be loaded on
the file system before any applications; otherwise, some data may not be replicated.
After the block device replication configuration is complete, applications must read and write
through the /dev/loop# device in order for replication to work.
You can also attach and detach DTLOOP manually using the Setup Tasks, Configure Block
Device Replication, Manual Replication Configuration menu option. Changes made from
this menu are not persisted between reboots/restarts.
Changing the driver performance settings can have a positive or negative impact on server
performance. These settings are for advanced users. If you are uncertain how to best modify the
driver performance settings, contact technical support.
l Toggle Adaptive Throttling - You can toggle between enabling (true) and disabling
(false) Adaptive Throttling. Throttling occurs when kernel memory usage exceeds the
Throttling Start Level percentage. When throttling is enabled, operations are delayed by,
at most, the amount of time set in Maximum Throttling Delay, thus reducing kernel
memory usage. Throttling stops when the kernel memory usage drops below the
Throttling Stop Level percentage.
l Toggle Forced Adaptive Throttling - You can toggle between enabling (true) and
disabling (false) Forced Adaptive Throttling. This causes all operations to be delayed by,
at most, the amount of time set in Maximum Throttling Delay, regardless of the kernel
memory being used. Adaptive Throttling must be enabled (true) in order for Forced
Adaptive Throttling to work.
l Set Maximum Throttling Delay - This option is the maximum time delay, in milliseconds,
memory usage during a throttling delay. If a delay is no longer needed, the remainder of the
delay is skipped.
l Set Throttling Start Level - Throttling starts when disk writes reach the specified
percentage. This prevents the driver from stopping replication because memory has been
exhausted.
l Set Throttling Stop Level - Throttling stops when disk writes reach the specified
percentage.
l Set Memory Usage Limit - This option is the amount of kernel memory, in bytes, used for
queuing replication operations. When this limit is exceeded, the driver will send an error to
the daemon forcing a remirror of all active connections.
l Set Maximum Write Buffer Size - This option is the maximum amount of system
memory, in bytes, allowed for a single write operation. Operations exceeding this amount
are split into separate operations in the queue.
6. After you have completed your driver performance modifications, press Q as many times as
needed to return to the main menu or to exit DTSetup.
completely unloading the Double-Take daemon and Double-Take drivers and then
reloading them.
l Stop the running service and teardown driver config - This option stops the Double-
Configure Block Device Replication. When you press Q to exit from that menu, you will
return to this menu.
4. When you have completed your starting and stopping tasks, press Q as many times as needed to
return to the main menu or to exit DTSetup.
6. When you have completed your documentation and troubleshooting tasks, press Q as many times
as needed to return to the main menu or to exit DTSetup.
If the Servers root is highlighted in the left pane of the Replication Console, the
Connection Wizard menu option will not be available. To access the menu, expand the
server tree in the left pane, and highlight a server in the tree.
2. The Connection Wizard opens to the Welcome screen. Review this screen and click Next to
continue.
At any time while using the Connection Wizard, click Back to return to previous screens
and review your selections.
3. If you highlighted a source in the Replication Console, the source will already be selected. If it is
not, select the Double-Take source. This is the server that you want to protect.
Double-Take will automatically attempt to log on to the selected source using previously
cached credentials. If the logon is not successful, the Logon dialog box will appear
prompting for your security identification.
Double-Take will automatically attempt to log on to the selected target using previously
cached credentials. If the logon is not successful, the Logon dialog box will appear
prompting for your security identification.
and directories to the same location on the target. The default location is /source_
name/replication_set_name/volume_name.
l Send all data to the same path on the target - This option sends all selected volumes
the connection will be established, and mirroring and replication will begin.
l If you want to set advanced options, click Advanced Options. The Connection Wizard will
close and the Double-Take Connection Manager will open. The Servers tab will be
completed.
The default number of files that are listed in the right pane of the Replication Console is
2500, but this is user configurable. A larger number of file listings allows you to see more
files in the Replication Console, but results in a slower display rate. A smaller number of
file listings displays faster, but may not show all files contained in the directory. To change
the number of files displayed, select File, Options and adjust the File Listings slider bar
to the desired number.
To hide offline files, such as those generated by snapshot applications, select File,
Options and disable Display Offline Files. Offline files and folders are denoted by the
arrow over the lower left corner of the folder or file icon.
Be sure to verify what files can be included by reviewing the Replication capabilities on
page 146.
Replication sets should only include necessary data. Including data such as temporary
files, logs, and/or locks will add unnecessary overhead and network traffic. For example, if
you are using Samba, make sure that the location of the lock file (lock dir in smb.conf) is
not a location in your Double-Take Availability replication set.
5. After selecting the data for this replication set, right-click the new replication set icon and select
Save. A saved replication set icon will change from red to black.
6. If you need to select a block device for replication, right-click the replication set and select Add
Device.
7. The block devices configured for Double-Take Availability replication are shown by default.
Highlight the device to include in the replication set and click OK.
If the device you want to include is not displayed, you can click Show Other Devices to
view all devices which are eligible for Double-Take Availability replication. You can select
Make sure your target has a partitioned device with sufficient space. It should be equal to
or greater than the storage of the source device.
The partition size displayed may not match the output of the Linux df command. This is
because df shows the size of the mounted file system, not the underlying partition, which
may be larger. Additionally, Double-Take Availability uses powers of 1024 when
computing GB, MB, and so on. The df command typically uses powers of 1000 and
rounds up to the nearest whole value.
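If you want to see this difference for yourself, GNU df can report sizes both ways; the mount point below is only an example.
# Sizes computed with powers of 1024 (closer to how Double-Take Availability computes GB and MB)
df -h /var/data
# Sizes computed with powers of 1000
df -H /var/data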
l Drag and drop the replication set onto a target. The target icon could be in the left or right
structure on the target. For example, /var/data and /usr/files on the source will be
replicated to /var/data and /usr/files, respectively, on the target.
l Custom Location - If the predefined options do not store the data in a location that
is appropriate for your network operations, you can specify your own custom location
where the replicated files will be sent. Click the target path and edit it, selecting the
appropriate location.
l Start Mirror on Connection - Mirroring can be initiated immediately when the
connection is established. If mirroring is not configured to start automatically, you must start
it manually after the connection is established.
Data integrity cannot be guaranteed without a mirror being performed. This option
is recommended for the initial connection.
l Full Mirror - All files in the replication set will be sent from the source to the target.
l Difference Mirror - Only those files that are different based on size or date and time
(depending on files or block devices) will be sent from the source to the target.
l Only send data if the source's date is newer than the target's date - Only
those files that are newer on the source are sent to the target.
If you are using a database application, do not use the newer option unless
you know for certain you need it. With database applications, it is critical that
all files, not just some of them that might be newer, get mirrored.
l Differences with Checksum - Any file that is different on the source and target
based on date, time, and/or size is flagged as different. The mirror then performs a
checksum comparison on the flagged files and only sends those blocks that are
different.
l Differences with no Checksum - Any file that is different on the source and target
Database applications may update files without changing the date, time, or
file size. Therefore, if you are using database applications, you should use
the File Differences with checksum or Full option.
Configuration tab.
l Failover Control Center - From the Failover Control Center, select Settings,
Communications.
l Double-Take server - From the Replication Console, right-click on a server in the tree in
the left pane of the Replication Console, select Properties, and the Network tab.
2. You need to configure your hardware so that Double-Take traffic is permitted access through the
router and directed appropriately. Using the port information from the previous section, configure
your router identifying each Double-Take server, its IP address, and the Double-Take and router
ports. Also, note the following caveats.
l Since Double-Take communication occurs bidirectionally, make sure you configure your
router for both incoming and outgoing traffic for all of your Double-Take servers and
Double-Take clients.
l In addition to UDP heartbeats, Double-Take failover can use ICMP pings to determine if
the source server is online. If you are going to use ICMP pings and a router between the
source and target is blocking ICMP traffic, failover monitors cannot be created or used. In
this situation, you must configure your router to allow ICMP pings between the source and
target.
Since there are many types of hardware on the market, each can be configured differently. See
your hardware reference manual for instructions on setting up your particular router.
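As one hedged illustration only, if the device between the source and target happened to be a Linux-based firewall using iptables, rules along the following lines would permit the ICMP pings and the bidirectional Double-Take traffic. The addresses and port number are placeholders, not values from this guide; consult your own hardware documentation for the equivalent settings.
# Allow ICMP between the source and target (placeholder addresses)
iptables -A FORWARD -p icmp -s 10.10.1.104 -d 10.10.1.105 -j ACCEPT
iptables -A FORWARD -p icmp -s 10.10.1.105 -d 10.10.1.104 -j ACCEPT
# Allow Double-Take traffic in both directions on its service port (placeholder port)
iptables -A FORWARD -p tcp --dport 6320 -j ACCEPT
iptables -A FORWARD -p udp --dport 6320 -j ACCEPT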
3. If your network is configured to propagate UDP broadcasts, your servers will be populated in the
Replication Console from across the router. If not, you have to manually insert the servers by
selecting Insert, Server. Type the IP address of the router the server is connected to and the port
number the server is using for heartbeats.
4. Once your server is inserted in the Replication Console, you can use the Connection Wizard or
the Connection Manager to establish your connection. See Establishing a data connection using
the automated Connection Wizard on page 64 or Establishing a connection manually using the
Connection Manager on page 69.
5. The Connection Manager opens to the Servers tab. Depending on how you opened the
Connection Manager, some entries on the Servers tab will be completed already. For example, if
you accessed the Connection Manager by right-clicking on a replication set, the name of the
replication set will be displayed in the Connection Manager. Verify or complete the fields on the
Servers tab.
l Source Server - Specify the source server that contains the replication set that is going to
a connection. Specify the replication set that will be connected to the TDU.
l Target Server - Select the Diagnostics target.
l Route - After selecting the Diagnostics target, the Route will automatically be populated
will be realistic.
l Start Replication on Connection - Make sure this option is selected so that your
Connection statistics
1. You can change the statistics that are displayed by selecting File, Options and selecting the
Statistics tab.
2. The statistics displayed in the Replication Console will be listed with check boxes to the left of
each item. Mark the check box to the left of each statistic that you want to appear, and clear the
check box to the left of each statistic that you do not want to appear.
3. The statistics appear on the Replication Console in the order they appear on the Statistics tab. If
you want to reorder the statistics, highlight the statistic to be moved and select the up or down
arrow button, to the right of the vertical scroll bar, to move the selection up or down in the list.
Repeat this process for each statistic that needs to be moved until you reach the desired order.
4. If you have made changes to the statistics list and have not yet saved them, you can go back to the
previously used settings by clicking Reset to Last. This will revert the list back to the last saved
settings.
5. To return the statistics list to the Double-Take default selection and order, click Reset to Default.
6. Click OK to apply and save any changes that have been made to the order or display of the
Replication Console statistics.
Statistics marked with an asterisk (*) are not displayed by default.
Replication Set
Replication set indicates the name of the connected replication set.
Connection ID
The connection ID is the incremental counter used to number each connection
established. This number is reset to one each time the Double-Take service is
restarted.
Target Name
The name of the target as it appears in the server tree in the left pane of the Replication
Console. If the server's name is not in the server tree, the IP address will be displayed.
Target IP
The target IP is the IP address on the target machine where the mirroring and
replication data is being transmitted.
If the Site Monitor and Connection Monitor settings are different, at times, the icons
and color may not be synchronized between the left and right panes.
An icon with yellow and blue servers indicates a server that is working properly.
A red X on a server icon indicates the Replication Console cannot communicate with that
server or that there is a problem with one of the server's connections. If the connection background is
gray, it is a communication issue. If the connection also has a red X, it is a connection issue.
A red tree view (folder structure) on a server icon indicates a restore is required because of a
failover.
The following icons and colors are displayed in the right pane when a server is highlighted in the
left pane.
l Replication Console - mc
l Series Number - The series number ranges from 1 to 999. For example, Double-Take begins
logging messages to dtlog1. When this file reaches its maximum size, the next log file will be
written to dtlog2. As long as log messages continue to be written, files dtlog3, dtlog4, dtlog5 will be
opened and filled. When the maximum number of files is reached, which by default is 5, the oldest
file is deleted when the sixth file is created. For example, when dtlog6 is created, dtlog1 is deleted
and when dtlog7 is created, dtlog2 is deleted. When file dtlog999 is created and filled, dtlog1 will
be re-created and Double-Take will continue writing log messages to that file. In the event that a
file cannot be removed, its number will be kept in the list, and on each successive file remove, the
log writer will attempt to remove the oldest file in the list.
l Extension - The extension for each log file is .dtl.
l Double-Take - dtlog1.dtl, dtlog2.dtl
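If you want to see which log file is currently being written, listing the files by modification time is a simple check. The directory is an assumption here; run the command in whatever location is configured for your Double-Take log files.
# List the Double-Take log files, newest first (run in the directory where the logs are written)
ls -lt dtlog*.dtl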
Window.
l Select the Message Window icon from the toolbar.
l Select Monitor, New Message Window and identify the Server that you want to monitor.
The message window is limited to the most recent 1000 lines. If any data is missing, an
entry in red will indicate the missing data. Regardless of the state of the message window,
all data is maintained in the Double-Take log on the server.
l Select Monitor, the name of the message window, and the appropriate control.
Close
Closes the message window
Clear
Clears the message window
Pause/Resume
Pauses and resumes the message window.
Pausing prevents new messages from being displayed in the message
window so that you are not returned to the bottom of the message window
every time a new message arrives. The messages that occur while the
window is paused are still logged to the Double-Take log file.
Resuming displays the messages that were held while the window was
paused and continues to display any new messages.
Pausing is automatically initiated if you scroll up in the message window.
The display of new log messages will automatically resume when you
scroll back to the bottom.
Copy
Allows you to copy selected text
Options
This control is only available from the Monitor menu. Currently, there are
no filter options available so this option only allows you to select a different
server. In the future, this control will allow you to filter which messages to
display.
4. To change which server you are viewing messages for, select a different machine from the
drop-down list on the toolbar. If necessary, the login process will be initiated.
5. To move the message window to other locations on your desktop, click and drag it to another area
or double-click it to automatically undock it from the Replication Console.
If you change the Maximum Length or Maximum Files, you must restart the
Double-Take daemon for the change to take effect.
In this information, con_id refers to the unique connection ID assigned to each connection
between a source replication set and a target.
There are several log messages with the ID of 0. See the description in the Message column in
the log file.
1 This evaluation period has expired. Mirroring and replication have been stopped. To obtain
a license, please contact your vendor.
Error - Contact your vendor to purchase either a single or site license.
2 The evaluation period expires in %1 day(s).
Information - Contact your vendor before the evaluation period expires to purchase
either a single or site license.
3 The evaluation period has been activated and expires in %1 day(s).
Information - Contact your vendor before the evaluation period expires to purchase
either a single or site license.
4 Duplicate license keys detected on machine %1 from machine %2.
Warning - If you have an evaluation license or a site license, no action is necessary. If
you have a single license, you must purchase either another single license or a site
license.
5 This product edition can only be run on Windows Server or Advanced Server running the
Server Appliance Kit.
Error - Verify your license key has been entered correctly and contact technical
support.
3000 Logger service was successfully started.
Information - No action required.
3001 Logger service was successfully stopped.
Information - No action required.
4000 Kernel was successfully started.
Information - No action required.
4001 Target service was successfully started.
Information - No action required.
4002 Source service was successfully started.
Information - No action required.
Any specified notification settings are retained when Enable notification is disabled.
If you do not specify an SMTP server, Double-Take Availability will attempt to use
the Linux mail command. The success will depend on how the local mail system is
configured. Double-Take Availability will be able to reach any address that the mail
command can reach.
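Before relying on the local mail system, you can confirm that the mail command itself can deliver a message. The recipient address below is a placeholder.
# Send a short test message through the local mail system
echo "Double-Take e-mail notification test" | mail -s "Test message" [email protected]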
l Log on to SMTP Server - If your SMTP server requires authentication, enable Log on
to SMTP Server and specify the Username and Password to be used for authentication.
Your SMTP server must support the LOGIN authentication method to use this feature. If
your server supports a different authentication method or does not support authentication,
you may need to add the Double-Take Availability server as an authorized host for relaying
e-mail messages. This option is not necessary if you are sending exclusively to e-mail
addresses that the SMTP server is responsible for.
l From AddressSpecify the e-mail address that you want to appear in the From field of
each Double-Take Availability e-mail message. The address is limited to 256 characters.
l Send ToSpecify the e-mail address that each Double-Take Availability e-mail message
should be sent to and click Add. The e-mail address will be inserted into the list of
addresses. Each address is limited to 256 characters. You can add up to 256 e-mail
addresses. If you want to remove an address from the list, highlight the address and click
Remove. You can also select multiple addresses to remove by Ctrl-clicking.
l Subject Prefix and Add event description to subjectThe subject of each e-mail
notification will be in the format Subject Prefix : Server Name : Message Severity : Message
ID : Message Description. The first and last components (Subject Prefix and Message
Description) are optional. The subject line is limited to 150 characters.
If desired, enter unique text for the Subject Prefix which will be inserted at the front of the
subject line for each Double-Take Availability e-mail message. This will help distinguish
Double-Take Availability messages from other messages. This field is optional.
If desired, enable Add event description to subject to have the description of the
message appended to the end of the subject line. This field is optional.
l Filter ContentsSpecify which messages that you want to be sent via e-mail. Specify
Information, Warning, and/or Error. You can also specify which messages to exclude
based on the message ID. Enter the message IDs as a comma or semicolon separated list.
You can indicate ranges within the list.
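The product does not document its exact parsing of this field here, but as a rough illustration (assuming ranges are written as two IDs joined by a hyphen, for example 4000-4002), the following Python sketch shows how a comma or semicolon separated exclusion list with ranges can be expanded:

import re

def expand_id_filter(spec):
    # Expand a specification such as "1-5;3000,4000-4002" into a set of
    # message IDs. Assumes a range is written as low-high with a hyphen.
    ids = set()
    for token in re.split(r"[,;]", spec):
        token = token.strip()
        if not token:
            continue
        if "-" in token:
            low, high = (int(part) for part in token.split("-", 1))
            ids.update(range(low, high + 1))
        else:
            ids.add(int(token))
    return ids

# Example: exclude the licensing messages 1-5 and the service start messages.
print(sorted(expand_id_filter("1-5;3000,3001,4000-4002")))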
You can test e-mail notification by specifying the options on the E-mail Notification
tab and clicking Test. If desired, you can send the test message to a different e-mail
address by selecting Send To and entering a comma or semicolon separated list of
addresses. Modify the message text up to 1024 characters, if necessary. Click
Send to test the e-mail notification. The results will be displayed in a message box.
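If you prefer to confirm outside of the product that your SMTP server accepts LOGIN authentication and can relay to your Send To addresses, a short Python check similar to the following can be used. The host, port, credentials, and addresses are placeholders, not values from this product:

import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"     # placeholder SMTP server
SMTP_PORT = 25
USERNAME = "notifier"              # placeholder credentials
PASSWORD = "secret"

msg = EmailMessage()
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "Double-Take notification test"
msg.set_content("Test message to verify SMTP relay and LOGIN authentication.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=30) as server:
    server.login(USERNAME, PASSWORD)   # requires an authentication method such as LOGIN
    server.send_message(msg)
print("Message accepted by the SMTP server")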
=================================
09/11/07 12:48:05:2040
=================================
SYSTEMALLOCATOR::Total Bytes: 0
IQALLOCATOR::Total Bytes: 0
SECURITY::Logins : 1 FailedLogins : 0
KERNEL::SourceState: 2 TargetState: 1 Start Time: Tue Sep 11 12:45:26 2007
RepOpsGenerated: 436845 RepBytesGenerated: 0
MirOpsGenerated: 3316423 MirBytesGenerated: 108352749214952
FailedMirrorCount: 0 FailedRepCount: 0
ActFailCount: 0 TargetOpenHandles: 0 DriverQueuePercent: 0
TARGET:: PeerAddress: 10.10.1.104 LocalAddress: 10.10.1.104
Ops Received: 25 Mirror Ops Received: 23
Retries: 0 OpsDropped: 0 Ops Remaining: 0
Orphan Files Removed: 0 Orphan Directories Removed: 0 Orphan Bytes Removed: 0
Bytes In Target Queue: 0 Bytes In Target Disk Queue: 0
TasksSucceeded: 0 TasksFailed: 0 TasksIgnored: 0
SOURCE::autoDisConnects : 0 autoReConnects : 1
lastFileTouched : /log/data_file
CONNECTION:: conPeerAddress: 10.10.1.104
connectTime: Tue Sep 11 12:45:34 2007
conState: 1 conOpsInCmdQueue: 0 conOpsInAckQueue: 0
conOpsInRepQueue: 0 conOpsInMirQueue: 0 conBytesInRepQueue: 0
conOpsTx: 27 conBytesInMirQueue: 0 conBytesTx: 14952687269
conBytesCompressedTx: 14952
conOpsRx: 201127 conBytesRx: 647062280 conResentOpCount: 0 conBytesInDiskQueue: 0
conBandwidthLimit: 429496295 conBytesSkipped: 22867624 conMirrorBytesRemain: 0
conMirrorPercent: 100.0%
conTaskCmdsSubmitted: 0 conTaskCmdsQueued: 0
conTasksSucceeded: 0 conTasksFailed: 0 conTasksIgnored: 0
3. At the top of the tab, specify the Folder where the log files for messages, alerts, verification, and
statistics will be saved.
4. Under Statistics, specify the following information.
l Filename - The name of the statistics log file. The default file name is statistic.sts.
l Maximum Length - The maximum length of the statistics log file. The default maximum
length is 10 MB. Once this maximum has been reached, Double-Take begins overwriting
the oldest data in the file.
l Write Interval - The frequency at which Double-Take writes the statistical data to the
statistics log file.
Command
DTSTAT
Description
Starts the DTStats statistics logging utility from a command prompt
Syntax
DTSTAT [-p][-i <interval>][-t <filename>] [-f <filename>] [-s <filename>] [-st
<filename>][-IP <address>] [-START <mm/dd/yyyy hh:mm>][-STOP
<mm/dd/yyyy hh:mm>] [-SERVER <ip_address> <port_number>]
Options
l -p - Do not print the output to the screen
l -i interval - Refresh from shared memory every interval seconds
l -t filename - Save the data from memory to the specified binary file filename
l -f filename - Reads from a previously saved binary file, filename, that was
generated using the -t option instead of reading from memory
l -s filename - Saves only the connection data from the data in memory to an ASCII,
comma-delimited file, filename
l -st filename - Saves only the target data from the data in memory to an ASCII,
comma-delimited file, filename
l -f filename1 -s filename2 - Saves only the connection data from a previously
saved binary file, filename1, to an ASCII, comma-delimited file, filename2
l -f filename1 -st filename2 - Saves only the target data from a previously saved
binary file, filename1, to an ASCII, comma-delimited file, filename2
l -IP address - Filters out the specified address in the IP address field and prints only
those entries. Specify more than one IP address by separating them by a comma.
l -START mm/dd/yyyy hh:mm - Filters out any data prior to the specified date and
time
l -STOP mm/dd/yyyy hh:mm - Filters out any data after the specified date and time
l -SERVER ip_address port_number - Connects DTStat to the specified IP
address using the specified port number instead of to the local machine
Examples
l DTStat -i 300
l DTStat -p -i 300 -t AlphaStats.sts
l DTStat -f AlphaStats.sts -s AlphaStats.csv -start 02/02/2007 09:25
l DTStat -server 206.31.4.51 1106
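As an illustration only, the documented -p, -i, -t, -f, and -s options can be scripted. The dtstat path and file names in this Python sketch are assumptions for the example, not values defined by the product:

import subprocess

DTSTAT = "/usr/bin/dtstat"         # assumed location of the DTStat binary
BINARY_FILE = "AlphaStats.sts"
CSV_FILE = "AlphaStats.csv"

# Capture statistics every 300 seconds without printing to the screen.
# This call runs until DTStat is stopped (for example, with Ctrl-C).
subprocess.run([DTSTAT, "-p", "-i", "300", "-t", BINARY_FILE], check=False)

# Convert the saved binary file to an ASCII, comma-delimited file.
subprocess.run([DTSTAT, "-f", BINARY_FILE, "-s", CSV_FILE], check=True)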
The categories you see will depend on the function of your server (source, target, or both).
If you have multiple IP addresses connected to one target server, you will see multiple Target
sections for each IP address.
If you convert your statistics output to an ASCII, comma-delimited file using the dtstat -s option,
keep in mind the following differences.
l The statistic labels will be slightly different in the ASCII file than in the following table.
l The statistics will appear in a different order in the ASCII file than in the following table.
l The statistics in the Target Category in the following table are not included in the ASCII file.
l The Kernel statistic Target Open Handles is not included in the ASCII file.
l The ASCII file contains a Managed Pagefile Alloc statistic which is no longer used.
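Because the labels and ordering in the ASCII file differ from the table that follows, it can be easier to inspect the header row programmatically. This Python sketch simply lists the column names and row count of a file produced with the -s option (the file name is an assumption):

import csv

# File produced earlier with: dtstat -f AlphaStats.sts -s AlphaStats.csv
with open("AlphaStats.csv", newline="") as handle:
    reader = csv.reader(handle)
    header = next(reader)          # column labels exactly as DTStat writes them
    rows = list(reader)

print("Columns found:", len(header))
for name in header:
    print(" ", name)
print("Data rows:", len(rows))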
Date/Time Stamp
The date and time that the snapshot was taken. This is the date and time that each
statistic was logged. By default, these are generated once a second, as long as there
are statistics being generated. If mirroring/replication is idle, then DTStat will be idle as
well.
System Allocator, Total Bytes
The number of bytes currently allocated to the system pagefile
IQAllocator, Total Bytes
The number of bytes currently allocated to the intermediate queue
Security, Logins
The number of successful login attempts
Security, Failed Logins
The number of failed login attempts
Kernel, SourceState
l 0 - Source is not running
l 1 - Source is running without the replication driver
l 2 - Source is running with the replication driver
If your snmpd.conf file is empty, add the following content to the file, where <SNMP
Manager IP> is the IP address where the SNMP manager is being run.
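The exact directives required by Double-Take are not reproduced at this point in the document; consult the installed man pages for the authoritative list. As a rough sketch only, a minimal net-snmp configuration that forwards traps to a manager usually contains lines similar to the following, where the community string is an assumption you should replace with your own value:

# Assumed minimal net-snmp settings; adjust the community string as required
rocommunity  public
trapsink     <SNMP Manager IP>  public
trap2sink    <SNMP Manager IP>  public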
Kernel, dttrapKernelStarted
Double-Take has started
Kernel, dttrapKernelStopped
Double-Take has stopped
License, dttrapLicenseViolationStartingSource
The source cannot be started due to a license violation
License, dttrapLicenseViolationOnNetwork
A Double-Take serial number conflict was identified on the network
Source, dttrapSourceStarted
Double-Take source component has started
Source, dttrapSourceStopped
Double-Take source component has stopped
Target, dttrapTargetStarted
Double-Take target component has started
Target, dttrapTargetStopped
Double-Take target component has stopped
Connection, dttrapConnectionRequested
The source has requested a connection to the target
Connection, dttrapConnectionRequestReceived
The target has received a connection request from the source
Connection, dttrapConnectionSucceeded
The source to target connection has been established
Connection, dttrapConnectionPause
The source to target connection has paused
Connection, dttrapConnectionResume
The source to target connection has resumed
General, dtUpTime
Time in seconds since Double-Take was last started
General, dtCurrentMemoryUsage
Amount of memory allocated from the Double-Take memory pool
General, dtMirOpsGenerated
The number of mirror operations (create, modify, or delete) that have been transmitted
by the mirroring process
General, dtMirBytesGenerated
The number of bytes that have been transmitted by the mirroring process
General, dtRepOpsGenerated
The number of operations (create, modify, or delete) that have been transmitted by the
replication process
General, dtRepBytesGenerated
The number of bytes that have been transmitted by the replication process
General, dtFailedMirrorCount
The number of operations that failed to mirror because they could not be read on the
source
General, dtFailedRepCount
The number of operations that failed to be replicated because they could not be read on
the source
General, dtActFailCount
The number of license key errors
General, dtAutoDisCount
The number of auto-disconnects
General, dtAutoReCount
The number of auto-reconnects
General, dtDriverQueuePercent
The amount of throttling calculated as a percentage of the stop replicating limit
1. If data cannot immediately be transmitted to the target, it is stored, or queued, in system memory.
You can configure how much system memory you want to use for queuing. By default, 128 or 512
MB of memory is used, depending on your operating system.
2. When the allocated amount of system memory is full, new changed data bypasses the full system
memory and is queued directly to disk. Data queued to disk is written to a transaction log. Each
transaction log can store 5 MB worth of data. Once the log file limit has been reached, a new
transaction log is created. The logs can be distinguished by the file name which includes the target
IP address, the Double-Take port, the connection ID, and an incrementing sequence number.
You may notice transaction log files that are not the defined size limit. This is because data
operations are not split. For example, if a transaction log has 10 KB left until the limit and
the next operation to be applied to that file is greater than 10 KB, a new transaction log file
will be created to store that next operation. Also, if one operation is larger than the defined
size limit, the entire operation will be written to one transaction log.
3. When system memory is full, the most recently changed data is added to the disk queue, as
described in step 2. This means that system memory contains the oldest data. Therefore, when
data is transmitted to the target, Double-Take pulls the data from system memory and sends it.
This ensures that the data is transmitted to the target in the same order it was changed on the
source. Double-Take automatically reads operations from the oldest transaction log file into
system memory. As a transaction log is depleted, it is deleted. When all of the transaction log files
are deleted, data is again written directly to system memory (step 1).
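The memory-then-disk behavior in steps 1 through 3 can be summarized with a simple model. The following Python sketch is purely illustrative (the limits, and the omission of refilling memory from the oldest transaction log, are simplifications) and shows why the oldest data always leaves system memory first while new data spills to 5 MB transaction logs:

from collections import deque

MEMORY_LIMIT = 128 * 1024 * 1024    # default memory queue size, in bytes
LOG_LIMIT = 5 * 1024 * 1024         # each transaction log holds about 5 MB

memory_queue = deque()              # oldest operations stay at the front
memory_used = 0
disk_logs = [[]]                    # transaction logs, newest last
disk_used_in_log = 0

def queue_operation(op_bytes, op):
    # Queue to memory until full, then spill to disk transaction logs.
    global memory_used, disk_used_in_log
    if memory_used + op_bytes <= MEMORY_LIMIT:
        memory_queue.append((op_bytes, op))
        memory_used += op_bytes
        return
    # Memory is full. Operations are never split, so start a new log if the
    # current one cannot hold this operation and already contains data.
    if disk_used_in_log + op_bytes > LOG_LIMIT and disk_logs[-1]:
        disk_logs.append([])
        disk_used_in_log = 0
    disk_logs[-1].append((op_bytes, op))
    disk_used_in_log += op_bytes

def transmit_next():
    # The oldest data lives in system memory, so it is always sent first.
    # (Refilling memory from the oldest transaction log is omitted here.)
    global memory_used
    if memory_queue:
        op_bytes, op = memory_queue.popleft()
        memory_used -= op_bytes
        return op
    return None

queue_operation(64 * 1024, "write /log/data_file")
print(transmit_next())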
l Folder - This is the location where the disk queue will be stored. Double-Take Availability
displays the amount of free space on the volume selected. Any changes made to the queue
location will not take effect until the Double-Take daemon has been restarted on the server.
Select a location on a volume that will have minimal impact on the operating system and
applications being protected. For best results and reliability, this should be a dedicated,
non-boot volume. The disk queue should not be on the same physical or logical volume as
the data being replicated.
l Maximum system memory for queue - This is the amount of system memory, in MB,
that will be used to store data in queues. When exceeded, queuing to disk will be triggered.
This value is dependent on the amount of physical memory available but has a minimum of
32 MB. By default, 128 MB of memory is used. If you set it lower, Double-Take Availability
will use less system memory, but you will queue to disk sooner which may impact system
performance. If you set it higher, Double-Take Availability will maximize system
performance by not queuing to disk as soon, but the system may have to swap the memory
to disk if the system memory is not available.
Since the source is typically running a production application, it is important that the amount
of memory Double-Take Availability and the other applications use does not exceed the
amount of RAM in the system. If the applications are configured to use more memory than
there is RAM, the system will begin to swap pages of memory to disk and the system
performance will degrade. For example, by default an application may be configured to use
all of the available system memory when needed, and this may happen during high-load
operations. These high-load operations cause Double-Take Availability to need memory to
queue the data being changed by the application. In this case, you would need to configure
the applications so that they collectively do not exceed the amount of RAM on the server.
Perhaps on a server with 1 GB of RAM running the application and Double-Take
Availability, you might configure the application to use 512 MB and Double-Take Availability
to use 256 MB, leaving 256 MB for the operating system and other applications on the
system. Many server applications default to using all available system memory, so it is
important to check and configure applications appropriately, particularly on high-capacity
servers.
Any changes to the memory usage will not take effect until the Double-Take daemon has
been restarted on the server.
l Maximum disk space for queue - This is the maximum amount of disk space, in MB, in
the specified Folder that can be used for Double-Take Availability disk queuing, or you can
select Unlimited which will allow the queue usage to automatically expand whenever the
available disk space expands. When the disk space limit is reached, Double-Take
Availability will automatically begin the auto-disconnect process. By default, Double-Take
Availability will use an unlimited amount of disk space. Setting this value to zero (0) disables
disk queuing.
l Minimum Free Space - This is the minimum amount of disk space in the specified Folder
that must be available at all times. By default, 50 MB of disk space will always remain free.
The Minimum Free Space should be less than the amount of physical disk space minus
Maximum disk space for queue.
l Alert at following queue usage percentage - This is the percentage of the disk queue
that must be in use to trigger an alert message in the Double-Take Availability log. By
default, the alert will be generated when the queue reaches 50%.
5. Click OK to save the settings.
If you have changed data on the target while not failed over, for example if you were
testing data on the target, Double-Take is unaware of the target data changes. You must
manually remirror your data from the source to the target, overwriting the target data
changes that you caused, to ensure data integrity between your source and target.
3. Verify that the check box Automatically Reconnect During Source Initialization is marked to
enable the auto-reconnect feature.
4. Click OK to save the settings.
If you have multiple connections to the same target, all connections will be paused and resumed.
If a connection is disconnected and the target is monitoring the source for failover, you will be
prompted if you would like to continue monitoring for a failure. If you select Yes, the Double-
Take connection will be disconnected, but the target will continue monitoring the source. To
make modifications to the failure monitoring, you will need to use the Failover Control Center. If
you select No, the Double-Take connection will be disconnected, and the source will no longer
be monitored for failure by the target.
If a connection is disconnected while large amounts of data still remain in queue, the Replication
Console may become unresponsive while the data is being flushed. The Replication Console
will respond when all of the data has been flushed from the queue.
l File differences - Only those files that are different based on size or date and time will be
sent from the source to the target. Expand File difference mirror options compared below to
see how the file difference mirror settings work together, as well as how they work with the
global checksum setting on the Source tab of the Server Properties.
l Send data only if Source is newer than Target - Only those files that are newer on the
source than on the target will be sent to the target.
If you are using a database application, do not use the newer option unless you
know for certain you need it. With database applications, it is critical that all files, not
just some of them that might be newer, get mirrored.
l Use block checksum - For those files flagged as different, the mirror performs a
checksum comparison and only sends those blocks that are different.
l Calculate Replication Set size prior to mirror - Determines the size of the replication set
prior to starting the mirror. The mirroring status will update the percentage complete only if
the replication set size is calculated.
l Differences with Checksum - Any file that is different on the source and target based on
date, time, and/or size is flagged as different. The mirror then performs a checksum
comparison on the flagged files and only sends those blocks that are different.
l Checksum All enabled - The mirror performs a checksum comparison on all files and
only sends those blocks that are different.
Auto-remirror is a per source option. When enabled, all connections from the source will perform
an auto-remirror after an auto-reconnect. When disabled, none of the connections from the
source will perform an auto-remirror after an auto-reconnect.
1. Right-click a server in the left pane of the Replication Console and select Properties.
2. Select the Setup tab.
3. Verify that the Perform Remirror After Auto-Reconnect check box is selected to initiate an
auto-remirror after an auto-reconnect.
l Differences with Checksum - Any file that is different on the source and target based on
date, time, and/or size is flagged as different. The mirror then performs a checksum
comparison on the flagged files and only sends those blocks that are different.
l Differences with no Checksum - Any file that is different on the source and target based
on date, time, and/or size is sent to the target in its entirety.
Database applications may update files without changing the date, time, or file size.
Therefore, if you are using database applications, you should use the Differences
with checksum or Full option.
Orphan file configuration is a per target option. All connections to the same target will have the
same orphan file configuration.
If Double-Take is configured to move orphan files, the Double-Take log file will indicate that
orphan files have been deleted even though they have actually been moved. This is a reporting
issue only.
If delete orphans is enabled, carefully review any replication set rules that use wildcard
definitions. If you have specified wildcards to be excluded from your replication set, files
matching those wildcards will also be excluded from orphan file processing and will not be
deleted from the target. However, if you have specified wildcards to be included in your
replication, those files that fall outside the wildcard inclusion rule will be considered orphans and
will be deleted from the target.
1. If you want to preview which files are identified as orphan files, right-click an established
connection and select Remove Orphans, Preview. Check the log file on the target for the list of
orphaned files.
2. If you want to remove orphan files manually, right-click an established connection and select
Remove Orphans, Start.
3. If you want to stop the process after it has been started, right-click the connection and select
Remove Orphans, Stop.
4. To configure orphan files for processing during a mirror, verify, or restore, use the following
instructions.
a. Right-click the connection on the right pane of the Replication Console and select
Connection Manager.
b. Select the Orphans tab.
If you are moving or deleting orphan files, select a move location outside of the
replication set. If you select the location where the files are currently located, the
files will be deleted. If you select another location inside the replication set, the files
will be moved multiple times and then possibly deleted.
f. Specify if you want to Remove All Orphans or Remove Orphans not modified within
the following time period. If you select the time-based option, only orphans older than
the time you specify will be removed.
g. Click OK to save the settings.
replication set data, that link will be created on the target as a regular directory if it must be
created as part of the target path.
l If a soft link exists in a replication set (or is moved into a replication set) and points to a file or
directory inside the replication set, Double-Take will remap the path contained in that link
based on the Double-Take target path when the option RemapLink is set to the default
value (1). If RemapLink is set to zero (0), the path contained in the link will retain its original
mapping.
l If a soft link exists in a replication set (or is moved into a replication set) and points to a file or
directory outside the replication set, the path contained in that link will retain its original
mapping and is not affected by the RemapLink option.
l If a soft link is moved out of or deleted from a replication set on the source, that link will be
moved out of or deleted from the target accordingly.
l If a soft link to a file is copied into a replication set on the source and the operating system
copies the file that the link pointed to rather than the link itself, then Double-Take replicates
the file copied by the operating system to the target. If the operating system does not follow
the link, only the link is copied.
l If a soft link to a directory is copied into a replication set on the source and the operating
system copies the directory and all of its contents that the link pointed to rather than the link
itself, then Double-Take replicates the directory and its contents copied by the operating
system to the target. If the operating system does not follow the link, only the link is copied.
l If any operating system commands, such as chmod or chown, are directed at a soft link on
the source and the operating system redirects the action to the file or directory which the
link references, then if the file or directory referenced by the link is in a replication set, the
operation will be replicated for that file to the target.
l The operating system redirects all writes to soft links to the file referenced by the link.
Therefore, if the file referenced by the symbolic link is in a replication set, the write operation
will be replicated to the target.
l If a hard link exists in a replication set on the source, having no
locations outside the replication set, the linked file will be mirrored to the target for all
locations and those locations will be linked if all link locations on the target exist on the same
partition.
l If a hard link crosses the boundaries of a replication set on the source, having locations both
inside and outside the replication set, the linked file will be mirrored to the target for only
those locations inside the replication set on the source, and those locations will be linked on
the target if all link locations exist on the same partition.
l If a hard link is created on the source linking a file outside the replication set to a location
inside the replication set, the linked file will be created on the target in the location defined
by the link inside the replication set and will be linked to any other locations for that file which
exist inside the replication set.
l If any hard link location is moved from outside the replication set into the replication set on
the source, the link will not be replicated to the target even if other link locations already
exist inside the replication set, but the linked file will be created on the target in the location
defined by the link.
l If any hard link location existing inside the replication set is moved within the replication set
on the source, the move will be replicated to the target and the link will be maintained if the
new link location does not cross partitions in the target path.
l If any hard link location existing inside the replication set is moved out of the replication set,
location inside the replication set on the source, the copy will be replicated to the target.
l If a hard linked file has a location in the replication set and any of the operating system
commands, such as chmod or chown, are directed at that file from a location inside the
replication set, the modification to the file will be replicated to the target. Operations on hard
links outside of the replication set are not replicated.
l If a hard linked file has a location in the replication set and a write operation is directed at
that file from inside the replication set, the write operation will be replicated to the target.
Operations on hard links outside of the replication set are not replicated.
l If any hard link location existing inside the replication set is deleted on the source, that file or
longer than 4094 characters will be skipped and logged to the Double-Take log file and the
Linux system log.
l Do not name replication sets or select a target location using illegal characters. Illegal
characters include the following.
l question mark ?
l colon :
l asterisk *
rules. When replication sets are modified, a generation number is associated with the
modifications. The generation number is incremented anytime the modifications are saved,
but the save is not allowed if there is a mismatch between the generation number on the
source and the Replication Console. You will be notified that the replication set could not be
saved. This error checking safeguards the replication set data in the event that more than
one client machine is accessing the source's replication sets.
l Double-Take will not replicate the same data from two different replication sets on your
source. The data will only be replicated from one of the replication sets. If you need to
replicate the same data more than once, connect the same replication set to multiple
targets.
l If you rename the root folder of a connected replication set, Double-Take interprets this
operation as a move from inside the replication set to outside the replication set. Therefore,
since all of the files under that directory have been moved outside the replication set and
are no longer a part of the replication set, those files will be deleted from the target copy of
the replication set. This, in essence, will delete all of your replicated data from the target. If
you have to rename the root directory of your replication set, make sure that the replication
set is not connected.
l When creating replication sets, keep in mind that when recursive rules have the same type
(include or exclude) and have the same root path, the top level recursive rule will take
precedence over lower level non-recursive rules. For example, if you have /var/data
included recursively and /var/data/old included non-recursively, the top level rule, /var/data/,
will take precedence and the rule /var/data/old will be discarded. If the rules are different
types (for example, /var/data is included and /var/data/old is excluded), both rules will be
applied as specified, as illustrated in the sketch after this list.
l Virus protection
l Virus protection software on the target should not scan replicated data. If the data is
protected on the source, operations that clean, delete, or quarantine infected files will be
replicated to the target by Double-Take. If the replicated data on the target must be
scanned for viruses, configure the virus protection software on both the source and target
to delete or quarantine infected files to a different directory that is not in the replication set. If
the virus software denies access to the file because it is infected, Double-Take will
continually attempt to commit operations to that file until it is successful, and will not commit
any other data until it can write to that file.
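To make the precedence rule concrete, here is an illustrative Python sketch (not Double-Take code) that applies the behavior described in the list above: a higher level recursive rule discards lower level non-recursive rules of the same type, while rules of different types are both applied:

def effective_rules(rules):
    # rules: list of (path, rule_type, recursive) tuples,
    # for example ("/var/data", "include", True).
    kept = []
    for path, rule_type, recursive in rules:
        shadowed = any(
            other_recursive
            and other_type == rule_type
            and other_path != path
            and path.startswith(other_path.rstrip("/") + "/")
            for other_path, other_type, other_recursive in rules
        )
        if not shadowed:
            kept.append((path, rule_type, recursive))
    return kept

rules = [
    ("/var/data", "include", True),       # recursive include
    ("/var/data/old", "include", False),  # discarded: same type, lower level
    ("/var/data/tmp", "exclude", False),  # kept: different type, both applied
]
print(effective_rules(rules))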
The default number of files that are listed in the right pane of the Replication Console is
2500, but this is user configurable. A larger number of file listings allows you to see more
files in the Replication Console, but results in a slower display rate. A smaller number of
file listings displays faster, but may not show all files contained in the directory. To change
the number of files displayed, select File, Options and adjust the File Listings slider bar
to the desired number.
To hide offline files, such as those generated by snapshot applications, select File,
Options and disable Display Offline Files. Offline files and folders are denoted by the
arrow over the lower left corner of the folder or file icon.
Be sure to verify what files can be included by reviewing the Replication capabilities on
page 146.
Replication sets should only include necessary data. Including data such as temporary
files, logs, and/or locks will add unnecessary overhead and network traffic. For example, if
you are using Samba, make sure that the location of the lock file (lock dir in smb.conf) is
not a location in your Double-Take Availability replication set.
5. After selecting the data for this replication set, right-click the new replication set icon and select
Save. A saved replication set icon will change from red to black.
6. If you need to select a block device for replication, right-click the replication set and select Add
Device.
7. The block devices configured for Double-Take Availability replication are shown by default.
Highlight the device to include in the replication set and click OK.
If the device you want to include is not displayed, you can click Show Other Devices to
view all devices which are eligible for Double-Take Availability replication. You can select
any of these devices, but you cannot use them for Double-Take Availability replication until
they are configured for Double-Take Availability replication. The status no dtloop indicates
the device is not configured for Double-Take Availability replication.
Make sure your target has a partitioned device with sufficient space. It should be equal to
or greater than the storage of the source device.
The partition size displayed may not match the output of the Linux df command. This is
because df shows the size of the mounted file system not the underlying partition which
may be larger. Additionally, Double-Take Availability uses powers of 1024 when
computing GB, MB, and so on. The df command typically uses powers of 1000 and
rounds up to the nearest whole value.
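The discrepancy is only a difference in units. As a quick illustration of the arithmetic, the same partition size reports differently depending on the divisor used:

# Same byte count expressed with binary (1024) and decimal (1000) units.
size_bytes = 500_000_000_000

gb_binary = size_bytes / (1024 ** 3)    # how Double-Take Availability computes GB
gb_decimal = size_bytes / (1000 ** 3)   # decimal units, as described above for df

print(f"{gb_binary:.2f} GB using powers of 1024")   # about 465.66 GB
print(f"{gb_decimal:.2f} GB using powers of 1000")  # 500.00 GB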
l Inc - Include indicates that the specified path is to be included in the files sent to the target
l Exc - Exclude indicates that the specified path is not to be included in the files
sent to the target
If the device you want to include is not displayed, you can click Show Other Devices to
view all devices which are eligible for Double-Take replication. You can select any of
these devices, but you cannot use them for Double-Take replication until they are
configured for Double-Take replication. The status no dtloop indicates the device is not
configured for Double-Take replication.
Make sure your target has a partitioned device with sufficient space. It should be equal to
or greater than the storage of the source device.
The partition size displayed may not match the output of the Linux df command. This is
because df shows the size of the mounted file system not the underlying partition which
may be larger. Additionally, Double-Take uses powers of 1024 when computing GB, MB,
and so on. The df command typically uses powers of 1000 and rounds up to the nearest
whole value.
If you save changes to a connected replication set, it is recommended that you perform a
mirror to guarantee data integrity between the source and target machines. A dialog box
will appear instructing you to disconnect and reconnect the replication set and perform a
difference mirror.
3. If the replication set size has never been determined, click Calculate. If the replication set has
previously been determined, the button will be labeled Recalculate. Depending on user activity,
the size shown may not accurately reflect the current size of the replication set. If changes are
occurring to files in the replication set while the calculation is being made, the actual size may
differ slightly. The amount of data is determined at the exact time the calculation is made.
4. Click OK to return to the Replication Console.
You can also configure the replication set calculation when establishing a connection
through the Connection Manager by selecting Calculate Replication Set size on
connection on the Mirroring tab.
If your replication set contains a large number of files, for example, ten thousand or more,
you may want to disable the calculation of the replication set size so that data will start being mirrored sooner.
If you disable this option on a source server, you can still submit tasks to be processed on a
target, although task command processing must be enabled on the target.
Differences in files on the source and target should be expected for files and applications that
are in use during the verification process.
l Verify only - This option verifies the data and generates a verification log, but it does not
remirror any files that are different on the source and target.
l Remirror data to the target automatically - This option verifies the data, generates a
verification log, and remirrors to the target any files that are different on the source.
l Only if the source's date is newer than the target's - If you are remirroring your files,
you can specify that only files that are newer on the source than on the target be remirrored.
If you are using a database application, do not use the newer option unless you
know for certain you need it. With database applications, it is critical that all files, not
just some of them that might be newer, get mirrored.
l Use Checksum comparison to send minimal blocks of data - Specify if you want the
verification process to use a block checksum comparison to determine which blocks are
different. If this option is enabled, only those blocks (not the entire files) that are different will
be identified in the log and/or remirrored to the target.
Database applications may update files without changing the date, time, or file size.
Therefore, if you are using database applications, you should use the block
checksum comparison to ensure proper verification and remirroring.
3. Specify when you want to start the initial verification. Select the immediate date and time by
clicking Now, or enter a specific Date and Time. The down arrow next to Date displays a
calendar allowing easy selection of any date. Time is formatted for any AM or PM time.
4. Mark the Reverification Interval check box to repeat the verification process at the specified
interval. Specify an amount of time and choose minutes, hours, or days.
5. Select if you want to Remirror data to the target automatically. When enabled, Double-Take
will verify the data, generate a verification log, and remirror to the target any files that are different
on the source. If disabled, Double-Take will verify the data and generate a verification log, but no
files will be remirrored to the target.
6. If you are remirroring your files, you can specify Only send data if source's date is newer than
the target's date so that only files that are newer on the source than on the target are remirrored.
7. Specify if you want the verification process to Use Checksum to send minimal blocks of data
to determine which blocks are different. If this option is enabled, only those blocks (not the entire
files) that are different will be identified in the log and/or remirrored to the target.
Database applications may update files without changing the date, time, or file size.
Therefore, if you are using database applications, you should use the block checksum
comparison to ensure proper verification and remirroring.
3. At the top of the window, Folder identifies the location where the log files identified on this tab are
stored. By default, the log files are stored in the same directory as the Double-Take program files.
4. Under the Verification section, Filename contains the base log file name for the verification
process. The replication set name will be prepended to the base log file name. For example, since
the default is DTVerify.log, the verification log for the replication set called UserData would be
UserData DTVerify.log.
5. Specify the Maximum Length of the log file. The default is 1048576 bytes (1 MB). When the log
file reaches this limit, no additional data will be logged.
6. By default, the log is appended to itself each time a verification process is completed. Clear the
Changes made to the verification log in the Server Properties, Logging tab will apply to
all connections from the current source machine.
7. Specify the Language of the log file. Currently, English is the only available language.
8. Click OK to save the settings.
In the log file, each verification process is delineated by beginning and end markers. A list of files that are
different on the source and target is provided, as well as cumulative totals for the verification process. The
information provided for each file is the state of its synchronization between the source and the target at
the time the file is verified. If the remirror option is selected so that files that are different are remirrored,
the data in the verify log reflects the state of the file before it is remirrored, and does not report the state
of the file after it is remirrored. If a file is reported as different, review the output for the file to determine
what is different.
Double-Take checks the schedule once every second, and if a user-defined criteria is met,
transmission will start or stop, depending on the option specified.
Any replication sets from a source connected to the same IP address on a target will share the
same scheduled transmission configuration.
1. Right-click the connection on the right pane of the Replication Console and select Connection
Manager.
2. Select the Transmit tab. The Transmit tab contains four limit types: Bandwidth, Start, Stop,
and Window. The transmission options for each limit type are displayed by highlighting a
selection in the Limit Type box.
At the top of the Transmit tab dialog box, the Enable Transmission Limiting check box allows
you to turn the transmission options on or off. You can enable the transmission options by marking
the Enable Transmission Limiting check box when you want the options to be applied, but you
can disable the transmission options, without losing the settings, by clearing that check box.
Also at the top of the Transmit tab dialog box, the Clear All button, when selected, will remove all
transmission limitations that have been set under any of the limit types. The Clear button will clear
the settings only for the Limit Type selected.
3. When you schedule transmission start criteria, transmission will start when the criteria is met and
will continue until the queue is empty or a transmission stop criteria is met. Select the Start
option in the Limit Type box.
Define the start options for Double-Take transmission by using any combination of the following
options.
A Transmission Session Start setting will override any other start criteria. For
example, if you set the Transmission Session Start and the Queue Threshold,
transmission will not start until you reach the indicated start time.
4. Schedule any desired stop criteria to stop transmission after a transmission start criteria has
initiated the transmission. If you do not establish a stop criteria, transmission will end when the
queue is empty. Select the Stop option in the Limit Type box.
Define the stop options to stop Double-Take transmissions by using either or both of the following
options.
l Time Limit - The time limit specifies the maximum length of time for each transmission
period. Any data that is not sent during the specified time limit remains on the source queue.
When used in conjunction with the session interval start option, you can explicitly define
how often data is transmitted and how long each transmission lasts. Specify the maximum
length of time that Double-Take can continue transmitting by indicating a length of time and
The transmission start and stop criteria should be used in conjunction with each
other. For example, if you set the Queue Threshold equal to 10 MB and the Byte
Limit equal to 10 MB, a network connection will be established when there is 10
MB of data in the queue. The data will be transmitted and when the 10 MB Byte
Limit is reached, the network connection closes. This is useful in configurations
where metered charges are based on connection time.
Setting a transmission window by itself is not sufficient to start a transmission. You still
need to set a start criteria within the window.
Define a window to control Double-Take transmissions by enabling the feature and then
specifying both window options.
Double-Take checks the schedule once every second, and if a user-defined criteria is met,
transmission will start or stop, depending on the option specified.
Any replication sets from a source connected to the same IP address on a target will share the
same scheduled transmission configuration.
1. Right-click the connection on the right pane of the Replication Console and select Connection
Manager.
2. Select the Transmit tab. The Transmit tab contains four limit types: Bandwidth, Start, Stop,
and Window. The transmission options for each limit type are displayed by highlighting a
selection in the Limit Type box.
At the top of the Transmit tab dialog box, the Enable Transmission Limiting check box allows
you to turn the transmission options on or off. You can enable the transmission options by marking
the Enable Transmission Limiting check box when you want the options to be applied, but you
can disable the transmission options, without losing the settings, by clearing that check box.
Also at the top of the Transmit tab dialog box, the Clear All button, when selected, will remove all
transmission limitations that have been set under any of the limit types. The Clear button will clear
the settings only for the Limit Type selected.
3. Select the Bandwidth option in the Limit Type box. Mark the Limit Bandwidth check box to
enable the bandwidth limiting features. Define the bandwidth available for Double-Take
transmission by using either of the following options.
The only value that is persistently stored is the number of kilobits per second. When
the page is refreshed, the percentage and available bandwidth capacity may not be
the same value that you entered. Double-Take changes these values to the
maximum values for the smallest possible link.
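Because only the kilobits per second figure is stored, it can help to compute that value yourself before entering a percentage. The link speed in this illustrative calculation is an assumption:

# Convert a desired percentage of a link into kilobits per second.
link_speed_kbps = 100_000       # assumed 100 Mbps link
percentage = 40                 # portion of the link to allow Double-Take to use

limit_kbps = link_speed_kbps * percentage / 100
print(f"Enter a bandwidth limit of about {limit_kbps:.0f} kilobits per second")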
Any replication sets from a source connected to the same IP address on a target will share the
same compression configuration.
Keep in mind that the process of compressing data impacts processor usage on the source. If you notice
an impact on performance while compression is enabled in your environment, either adjust to a lower
level of compression, or leave compression disabled. Use the following guidelines to determine whether
you should enable compression:
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling compression.
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg, .gif)
and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some image files,
such as .bmp and .tif, are uncompressed, so enabling compression would be beneficial for those
types.
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Double-Take data.
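If you are unsure whether your data is already compressed, you can sample a few representative files and check how much a general-purpose compressor reduces them. This Python sketch (the file path is a placeholder) gives a rough indication only and does not predict Double-Take's own compression ratios:

import zlib

def compression_ratio(path, sample_bytes=1_000_000):
    # Return compressed/original size for the first part of a file.
    with open(path, "rb") as handle:
        data = handle.read(sample_bytes)
    if not data:
        return 1.0
    return len(zlib.compress(data, 6)) / len(data)

# Ratios near 1.0 mean the data is already compressed, so enabling compression
# would mostly add CPU load; ratios well below 1.0 suggest a likely benefit.
ratio = compression_ratio("/var/data/sample.dat")
print(f"Sample compresses to {ratio:.0%} of its original size")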
Use the following instructions for setting compression.
1. Right-click the connection on the right pane of the Replication Console and select Connection
Manager.
2. Select the Compression tab.
l From the Windows desktop, select Start, Programs, Double-Take for Linux,
If the target you need is not listed, click Add Target and manually enter a name or IP
address (with or without a port number). You can also select the Browse button to search
for a target machine name. Click OK to select the target machine and return to the
Failover Control Center main window.
l Type the name of the machine that you want to monitor in Machine Name(s) and click OK.
l Click Custom. Enter the name of the server and click Add. Specify the IP address and
subnet mask of the specified server and click OK. Click OK again.
The Insert Source Machine dialog closes and the Monitor Settings dialog remains open with your
source listed in the Names to Monitor tree.
6. In the Names to Monitor tree, locate and select the IP addresses on the source that you want to
monitor.
7. Highlight an IP address that you have selected for monitoring and select a Target Adapter that
will assume that IP address if failover occurs. Repeat this step for each IP address that is being
monitored.
8. Highlight an IP address that you have selected for monitoring and select a Monitor Interval. This
setting identifies the number of seconds between the monitor requests sent from the target to the
source to determine if the source is online. Repeat this step for each IP address that is being
monitored.
9. Highlight an IP address that you have selected for monitoring and select the Missed Packets.
This setting is the number of monitor replies sent from the source to the target that can be missed
before assuming the source machine has failed. Repeat this step for each IP address that is being
monitored.
To achieve shorter delays before failover, use lower Monitor Interval and Missed
Packets values. This may be necessary for IP addresses on machines, such as a web
server or order processing database, which must remain available and responsive at all
times. Lower values should be used where redundant interfaces and high-speed, reliable
network links are available to prevent the false detection of failure. If the hardware does
not support reliable communications, lower values can lead to premature failover. To
achieve longer delays before failover, choose higher values. This may be necessary for IP
addresses on slower networks or on a server that is not transaction critical. For example,
failover would not be necessary in the case of a server restart.
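The worst-case detection delay is roughly the Monitor Interval multiplied by the allowed Missed Packets. The values in this small illustrative calculation are examples, not recommendations:

# Approximate time before a source failure is declared.
monitor_interval_seconds = 5
missed_packets_allowed = 4

detection_delay = monitor_interval_seconds * missed_packets_allowed
print(f"A failed source is detected after roughly {detection_delay} seconds")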
10. Highlight the source name and specify the Items to Failover, which identifies which source
components you want to failover to the target.
l IP Addresses - If you want to failover the IP addresses on the source, enable this option
and then specify the addresses that you want to failover. When the source and target are
failed over.
l Include Unmonitored - All of the IP address(es) will be failed over.
11. By default, Manual Intervention is enabled, allowing you to control when failover occurs. When
a failure occurs, a prompt appears in the Failover Control Center and waits for you to manually
initiate the failover process. Disable this option only if you want failover to occur immediately when
a failure occurs.
12. If you are using any failover or failback scripts, click Scripts and enter the path and filename for
each script type. Scripts may contain any valid Linux command, executable, or script file.
Examples of functions specified in scripts include stopping daemons on the target before failover
because they may not be necessary while the target is standing in for the source, stopping
daemons on the target that need to be restarted with the source's machine name and IP address,
starting daemons or loading applications that are in an idle, standby mode waiting for failover to
occur, notifying the administrator before and after failover or failback occurs, stopping daemons
on the target after failback because they are no longer needed, stopping daemons on the target
that need to be restarted with the target machine's original name and IP address, and so
on. Specify each script that you want to run and the following options, if necessary.
13. If you want to delay the failover or failback processes until the associated script has completed,
mark the appropriate check box.
14. If you want the same scripts to be used as the default for future monitor sessions, mark the
appropriate check box.
15. Click OK to return to the Monitor Settings dialog box.
16. Click OK on the Monitor Settings dialog box to save your monitor settings and begin monitoring for
a failure.
nsupdate "dnsover"
5. Create a file called dnsover and add the following lines. This is the file called by your post-failover
script.
6. Create a failback script with the following command. Specify this script for post-failback.
nsupdate "dnsback"
7. Create a file called dnsback and add the following lines. This is the file called by your post-failback
script.
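The lines for these files depend on your DNS zone and are not reproduced at this point in the document. As an assumption-laden sketch only, a dnsover file typically deletes the source's A record and adds one pointing at the target, and dnsback reverses the change; every server, name, TTL, and address below is a placeholder:

; dnsover (placeholders only; adjust for your DNS server and zone)
server 10.10.1.1
update delete sourcehost.example.com. A
update add sourcehost.example.com. 300 A 10.10.1.104
send

; dnsback (restore the original source address)
server 10.10.1.1
update delete sourcehost.example.com. A
update add sourcehost.example.com. 300 A 10.10.1.102
send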
When failover and failback occur, the failover and failback scripts will automatically trigger DNS updates.
You can minimize the Failover Control Center and, although it will not appear in your Windows
taskbar, it will still be active and the failover icon will still appear in the desktop icon tray.
The following table identifies how the visual indicators change when the source is online.
The following table identifies how the visual indicators change when the source fails and failover is
initiated.
The following table identifies how the visual indicators change when failover is complete.
If the Failover Control Center is not running at the time the failure occurs, the manual
intervention dialog box will appear the next time the Failover Control Center is started.
When a failure occurs, an alert is forwarded to the Linux system log. You can then start the
Failover Control Center and respond to the manual intervention prompt.
If SNMP is installed and configured, an SNMP trap is also generated. When using a third-party
SNMP manager, an e-mail or page can be generated to notify you of the failure.
Files that were open or being accessed at the time of failover will generate Stale NFS file handle
error messages. Remount the NFS export to correct this error.
Click Cancel to abort the failover process. If necessary, you can initiate failover later from the Failover
Control Center. Click OK to proceed with failover.
6. From your target, confirm the Replication Console is communicating with the source using the
new, unique IP address.
a. From the Replication Console on the target, right-click the source and select Remove.
b. Depending on your configuration, the source may be automatically inserted back into the
Replication Console. If it is not, select Insert, Server. Specify the source server by the new
IP address and click OK.
7. Begin your restoration process.
a. From the Replication Console, select Tools, Restoration Wizard.
b. Review the Welcome screen and click Next to continue.
At any time while using the Restoration Wizard, click Back to return to previous
screens and review your selections.
c. Select the target that contains the current copy of the data that you want to restore and click
Next.
d. Select the original source or Alternate, if your original source is not listed. This option
identifies to the target which data set you are trying to restore so that the appropriate
replication sets can be presented to you.
e. Click Next to continue.
the name of that replication set by selecting it from the pull-down menu. You will have
an opportunity to modify the replication set definition.
l Create a new replication set with this name - If you choose to create a new
replication set, specify a replication set name. With this option, you will need to define
the data to be restored.
g. Click Next to continue.
h. A tree display appears identifying the data available for restoration. Mark the check box of
the volumes, directories, and/or files you want to restore. Keep in mind that if you exclude
volumes, folders, and/or files that were originally replicated, it may compromise the integrity
of your applications or data.
i. Click Next to continue
j. Select the new source server. This is the server where the data from the target will be
restored. This may be the original source server or a new server. Click Next to continue.
k. Select your network route to the new source, which includes the IP address and port
number. Also select the location on the new source for the restored data. If you want to set
a customized path, click in the field under Source Path to edit the location.
If you are using a database application, do not use the newer option
unless you know for certain you need it. With database applications, it
is critical that all files, not just some of them that might be newer, get
restored.
l Use alternate target files for executables that may be in use - If you have
executables that may be in use during the restoration, you can have Double-Take
create and update an alternate file during the restoration. Once the mirroring and
replication operations have been completed, the alternate file will be renamed to the
original file. This process will reduce the speed of your restoration, so it should only
be used if executables may be in use.
n. Review your selections and click Finish to begin the restoration.
8. Monitor the restoration connection, and after the Mirror Status is Idle, schedule a time for
failback. User downtime will begin once failback is started, so select a time that will have minimal
disruption on your users.
9. When you are ready, begin the failback process.
a. Stop user access to the target. The user downtime starts now.
b. In the Replication Console, watch the restoration connection until activity has ended and
replication is in a Ready state. This will happen as the final data in queue, if any, is applied
on the source. The replication Ready state indicates replication is waiting for new incoming
data changes.
c. Disconnect the restoration connection.
d. Open the Failover Control Center.
e. Select the original target that is currently standing in for the original failed source.
f. Highlight the failed source and click Failback. If you have a pre-failback script configured, it
will be started.
g. When failback is complete, the post-failback script, if configured, will be started. When the
script is complete, you will be prompted to determine if you want to continue failover
monitoring. Do not select either option yet. Leave the prompt dialog box open as is.
10. Complete the following steps on your source to update the source to its original identity.
a. Stop Double-Take.
b. Modify the source identity back to the original source IP address.
c. Start Double-Take.
11. Confirm the Replication Console is communicating with the source using the original IP address.
a. Right-click the source and select Remove.
b. Depending on your configuration, the source may be automatically inserted back into the
Replication Console. If it is not, select Insert, Server. Specify the source server by the
original IP address and click OK.
12. At this time, you can go back to the dialog box in the Failover Control Center. Select Continue or
Stop to indicate if you want to continue monitoring the source. After you have selected whether or
not to continue monitoring the source, the source post-failback script, if configured, will be started.
At this time, you can start any applications and allow end-users to access the data.
The source must be online and Double-Take must be running to ensure that the source
post-failback script can be started. If the source has not completed its boot process, the
command to start the script may be lost and the script will not be initiated.
Restoring across a NAT router requires the ports to be the same as the original
connection. If the ports have been modified (manually or reinstalled), you must set the
port numbers to the same values as the last valid source/target connection.
overwriting them. Any files that do not exist on the source are written also. If this option is
disabled, only files that do not exist on the source will be restored.
l Only if backup copy's date is more recent - This option restores only those files that
are newer on the target than on the source. The entire file is overwritten with this option.
If you are using a database application, do not use the newer option unless you
know for certain you need it. With database applications, it is critical that all files, not
just some of them that might be newer, get mirrored.
l Use Checksum comparison to send minimal blocks of data: Specify if you want the
restoration process to use a block checksum comparison to determine which blocks are
different. If this option is enabled, only those blocks (not the entire files) that are different will
be restored to the source.
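The block checksum idea can be illustrated with a short sketch. This is not Double-Take's actual implementation; the 64 KB block size, the MD5 digest, and the file paths are assumptions chosen for illustration only.

import hashlib

BLOCK_SIZE = 64 * 1024  # assumed block size, for illustration only

def changed_blocks(source_path, target_path):
    # Yield (offset, backup_block) for blocks whose checksums differ between the
    # source copy and the backup copy on the target. Only these blocks would
    # need to be sent back to the source during a checksum-based restoration.
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        offset = 0
        while True:
            s_block = src.read(BLOCK_SIZE)
            t_block = tgt.read(BLOCK_SIZE)
            if not s_block and not t_block:
                break
            if hashlib.md5(s_block).digest() != hashlib.md5(t_block).digest():
                yield offset, t_block
            offset += BLOCK_SIZE

# Example (hypothetical paths):
# for offset, block in changed_blocks("/data/db/file.dbf", "/backup/db/file.dbf"):
#     apply_block_to_source(offset, block)   # placeholder for the restore step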
17. If you want to configure orphan files, click the Orphans tab. The same orphan options are
available for a restoration connection as a standard connection.
18. Click Restore to begin the restoration.
After the restoration is complete, the restoration connection will automatically be disconnected and the
replication set deleted. At this time, you can start any applications and allow end-users to access the
data on the source.
4. Specify the server identity information. Some of the fields are informational only.
l Nickname: A nickname is saved in the Replication Console workspace, so it only
appears in the Replication Console on this server. It is not communicated across the
network. If you export a workspace and use it on another Double-Take server, the server
nickname will appear there also.
l Machine: This is the actual server name. This field is not modifiable.
l Addresses: The IP address(es) for this server are listed in this field. This information is
not modifiable and is displayed for your information. The machine's primary address is
listed first.
server tree.
l Broadcast Heartbeat: A Double-Take server is broadcasting Double-Take
heartbeats.
l Operating System: The server's operating system version is displayed.
The Replication Console Licensing tab uses older terminology, such as activation code
and node-locking. The activation code is actually your license key before it is activated.
Your node-locked code is the activation key that will activate your license.
feature that allows you to insert and run tasks at various points during the replication of
data. Because the tasks are user-defined, you can achieve a wide variety of goals with this
feature. For example, you might insert a task to create a snapshot or run a backup on the
target after a certain segment of data from the source has been applied on the target. This
allows you to coordinate a point-in-time backup with real-time replication.
based on date, time, and/or size is flagged as different. The mirror then performs a
checksum comparison on the flagged files and only sends those blocks that are
different.
l Differences with no Checksum: Any file that is different on the source and target
Database applications may update files without changing the date, time, or
file size. Therefore, if you are using database applications, you should use
the Differences with checksum or Full option.
Double-Take traffic will use. It can also be used on machines with multiple IP addresses on
a single NIC.
l Default Protocol: The default protocol for all Double-Take communications is the
TCP/IP protocol. In the future, Double-Take may support other communication protocols.
l Service Listen Port: Double-Take servers use the Service Listen Port to send and
Transmit Port.
If you want to control network traffic, you may find the Double-Take bandwidth
limiting features to be a better method.
l Folder: This is the location where the disk queue will be stored. Double-Take Availability
displays the amount of free space on the volume selected. Any changes made to the queue
location will not take effect until the Double-Take daemon has been restarted on the server.
Select a location on a volume that will have minimal impact on the operating system and
applications being protected. For best results and reliability, this should be a dedicated,
non-boot volume. The disk queue should not be on the same physical or logical volume as
the data being replicated.
l Maximum system memory for queue: This is the amount of system memory, in MB,
that will be used to store data in queues. When exceeded, queuing to disk will be triggered.
This value is dependent on the amount of physical memory available but has a minimum of
32 MB. By default, 128 MB of memory is used. If you set it lower, Double-Take Availability
will use less system memory, but you will queue to disk sooner which may impact system
performance. If you set it higher, Double-Take Availability will maximize system
performance by not queuing to disk as soon, but the system may have to swap the memory
to disk if the system memory is not available.
Since the source is typically running a production application, it is important that the amount
of memory Double-Take Availability and the other applications use does not exceed the
amount of RAM in the system. If the applications are configured to use more memory than
there is RAM, the system will begin to swap pages of memory to disk and the system
performance will degrade. For example, by default an application may be configured to use
all of the available system memory when needed, and this may happen during high-load
operations. These high-load operations cause Double-Take Availability to need memory to
queue the data being changed by the application. In this case, you would need to configure
the applications so that they collectively do not exceed the amount of RAM on the server.
For instance, on a server with 1 GB of RAM running the application and Double-Take
Availability, you might configure the application to use 512 MB and Double-Take Availability
to use 256 MB, leaving 256 MB for the operating system and other applications on the
system. Many server applications default to using all available system memory, so it is
important to check and configure applications appropriately, particularly on high-capacity
servers.
Any changes to the memory usage will not take effect until the Double-Take daemon has
been restarted on the server.
l Maximum disk space for queue: This is the maximum amount of disk space, in MB, in
the specified Folder that can be used for Double-Take Availability disk queuing, or you can
select Unlimited which will allow the queue usage to automatically expand whenever the
available disk space expands. When the disk space limit is reached, Double-Take
Availability will automatically begin the auto-disconnect process. By default, Double-Take
Availability will use an unlimited amount of disk space. Setting this value to zero (0) disables
disk queuing.
l Minimum Free Space: This is the minimum amount of disk space in the specified Folder
that must be available at all times. By default, 50 MB of disk space will always remain free.
The Minimum Free Space should be less than the amount of physical disk space minus
Maximum disk space for queue.
l Alert at following queue usage percentage: This is the percentage of the disk queue
that must be in use to trigger an alert message in the Double-Take Availability log. By
default, the alert will be generated when the queue reaches 50%.
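To illustrate the alert threshold, the following is a minimal monitoring sketch. The queue folder path is a hypothetical example, the Unlimited setting is approximated by the free space on the volume, and the script is not part of the product.

import os
import shutil

QUEUE_FOLDER = "/var/dtqueue"   # hypothetical queue Folder
MAX_QUEUE_MB = None             # None approximates the Unlimited setting
ALERT_PERCENT = 50              # default alert threshold described above

def queue_usage_percent(folder, max_queue_mb=None):
    # Sum the size of everything currently queued in the folder.
    used = sum(os.path.getsize(os.path.join(root, name))
               for root, _, names in os.walk(folder) for name in names)
    if max_queue_mb:
        capacity = max_queue_mb * 1024 * 1024
    else:
        # With an unlimited queue, measure usage against the space on the volume.
        capacity = used + shutil.disk_usage(folder).free
    return 100.0 * used / capacity if capacity else 0.0

if __name__ == "__main__":
    percent = queue_usage_percent(QUEUE_FOLDER, MAX_QUEUE_MB)
    if percent >= ALERT_PERCENT:
        print("ALERT: disk queue is at %.1f%% of capacity" % percent)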
5. Click OK to save the settings.
packets to mirror packets that are placed in the source queue. Specify a larger number if
you have a busy network that has heavy replication. Also, if you anticipate increased
network activity during a mirror, increase this number so that the replication queue does not
get too large.
l Replicate NT Security by Name: This is a Windows option only.
l Ignore Delete Operations: This option allows you to keep files on the target machine
after they are deleted on the source. When a file is deleted on the source, that delete
operation is not sent to the target. (All edits to files on the source are still replicated to the
target.)
If delete operations are ignored long enough, the potential exists for the target to
run out of space. In that case, you can manually delete files from the target to free
space.
Database applications may update files without changing the date, time, or file size.
Therefore, if you are using database applications, you should use the Block
Checksum All option to ensure proper file comparisons.
l Target Mirror Capacity High Percentage: You can specify the maximum percentage
of system memory that can contain mirror data before the target signals the source to
pause the sending of mirror operations. The default setting is 20.
l Target Mirror Capacity Low Percentage: You can specify the minimum percentage of
system memory that can contain mirror data before the target signals the source to resume
the sending of mirror operations. The default setting is 10. (A short sketch of this pause and
resume behavior follows this list.)
l Retry Delay for Incomplete Operations (seconds): This option specifies the amount
of time, in seconds, before retrying a failed operation on the target. The default setting is 3.
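The pause and resume thresholds behave like a simple high/low watermark. The sketch below only illustrates that hysteresis with the default 20 and 10 percent values; the actual target-side signaling is internal to Double-Take.

HIGH_PERCENT = 20   # default Target Mirror Capacity High Percentage
LOW_PERCENT = 10    # default Target Mirror Capacity Low Percentage

class MirrorFlowControl:
    def __init__(self):
        self.paused = False

    def update(self, mirror_memory_percent):
        # Return "pause", "resume", or None based on the current memory usage.
        if not self.paused and mirror_memory_percent >= HIGH_PERCENT:
            self.paused = True
            return "pause"    # target tells the source to stop sending mirror operations
        if self.paused and mirror_memory_percent <= LOW_PERCENT:
            self.paused = False
            return "resume"   # target tells the source to resume sending mirror operations
        return None

# Memory climbing past 20% triggers a pause; falling back to 10% resumes.
fc = MirrorFlowControl()
for pct in (5, 15, 22, 18, 12, 9):
    action = fc.update(pct)
    if action:
        print(pct, action)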
5. Click OK to save the settings.
4. Specify the database files that store the Double-Take replication set, connection, and scheduling
information.
l Folder: Specify the directory where each of the database files on this tab are stored. The
default location is the directory where the Double-Take program files are installed.
l Replication Set: This database file maintains which replication sets have been created
on the server along with their names, rules, and so on. The default file name is DblTake.db.
l Connection: This database file maintains the active source/target connection
4. Specify the location and file names for the log and statistics files.
l Folder: Specify the directory where each of the log files on this tab are stored. The default
location is the directory where the Double-Take program files are installed.
l Messages & Alerts
l Maximum Length: Specify the maximum length of the client and daemon log files.
The default size is 1048576 bytes and is limited by the available hard drive space.
l Maximum Files: Specify the maximum number of Double-Take alert log files that
l Filename: The verification log is created during the verification process and details
which files were verified as well as the files that are synchronized. This field contains
process to the same log file. If this check box is not marked, each verification process
that is logged will overwrite the previous log file. By default, this check box is
selected.
l Language: At this time, English is the only language available.
l Statistics
queue or replication bytes sent. The default file name is statistic.sts. This file is a
binary file that is read by the DTStat utility.
l Maximum Length: Specify the maximum length of the statistics log file. The default
maximum length is 10485760 bytes (10 MB). Once this maximum has been reached,
Double-Take begins overwriting the oldest data in the file.
l Write Interval: Specify how often Double-Take writes to the statistics log file. The
Any specified notification settings are retained when Enable notification is disabled.
If you do not specify an SMTP server, Double-Take Availability will attempt to use
the Linux mail command. The success will depend on how the local mail system is
configured. Double-Take Availability will be able to reach any address that the mail
command can reach.
l Log on to SMTP Server: If your SMTP server requires authentication, enable Log on
to SMTP Server and specify the Username and Password to be used for authentication.
Your SMTP server must support the LOGIN authentication method to use this feature. If
your server supports a different authentication method or does not support authentication,
you may need to add the Double-Take Availability server as an authorized host for relaying
e-mail messages. This option is not necessary if you are sending exclusively to e-mail
addresses that the SMTP server is responsible for. (A brief notification sketch follows the
test note below.)
l From Address: Specify the e-mail address that you want to appear in the From field of
each Double-Take Availability e-mail message. The address is limited to 256 characters.
l Send To: Specify the e-mail address that each Double-Take Availability e-mail message
should be sent to and click Add. The e-mail address will be inserted into the list of
addresses. Each address is limited to 256 characters. You can add up to 256 e-mail
addresses. If you want to remove an address from the list, highlight the address and click
Remove. You can also select multiple addresses to remove by Ctrl-clicking.
l Subject Prefix and Add event description to subject: The subject of each e-mail
notification will be in the format Subject Prefix : Server Name : Message Severity : Message
ID : Message Description. The first and last components (Subject Prefix and Message
Description) are optional. The subject line is limited to 150 characters.
If desired, enter unique text for the Subject Prefix which will be inserted at the front of the
subject line for each Double-Take Availability e-mail message. This will help distinguish
Double-Take Availability messages from other messages. This field is optional.
If desired, enable Add event description to subject to have the description of the
message appended to the end of the subject line. This field is optional.
l Filter Contents: Specify which messages you want to be sent via e-mail. Specify
Information, Warning, and/or Error. You can also specify which messages to exclude
based on the message ID. Enter the message IDs as a comma or semicolon separated list.
You can indicate ranges within the list.
You can test e-mail notification by specifying the options on the E-mail Notification
tab and clicking Test. If desired, you can send the test message to a different e-mail
address by selecting Send To and entering a comma or semicolon separated list of
addresses. Modify the message text up to 1024 characters, if necessary. Click
Send to test the e-mail notification. The results will be displayed in a message box.
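The subject format and LOGIN authentication described above can be pictured with a short sketch using Python's standard smtplib. The host name, credentials, and addresses are hypothetical placeholders; Double-Take Availability builds and sends its own notification messages.

import smtplib
from email.mime.text import MIMEText

SMTP_HOST = "smtp.example.com"              # hypothetical SMTP server
USERNAME, PASSWORD = "dtnotify", "secret"   # only needed if the server requires LOGIN auth

def send_notification(server_name, severity, msg_id, description,
                      subject_prefix="Double-Take", add_description=True):
    # Subject Prefix : Server Name : Message Severity : Message ID : Message Description
    parts = [subject_prefix, server_name, severity, str(msg_id)]
    if add_description:
        parts.append(description)
    subject = " : ".join(parts)[:150]       # the subject line is limited to 150 characters

    msg = MIMEText(description)
    msg["Subject"] = subject
    msg["From"] = "[email protected]"      # hypothetical From Address
    msg["To"] = "[email protected]"           # hypothetical Send To address

    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.login(USERNAME, PASSWORD)      # LOGIN authentication, if required
        smtp.send_message(msg)

# send_notification("alpha", "Error", 4096, "Connection to target lost")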
3. Specify your Username, Password, Domain, and whether you want your password saved.
4. Click OK, verify your access level by the resulting icon, and log on again if necessary.
When logging in, the user name, password, and domain are limited to 100 characters.
If your license key is missing or invalid, you will be prompted to open the Server Properties
General tab to add or correct the key. Select Yes to open the Server Properties dialog
box or select No to continue without adding a license key.
If the login does not complete within 30 seconds, it is automatically canceled. If this
timeout is not long enough for your environment, you can increase it by adjusting the
Communication Timeout on the Configuration tab of the Replication Console
properties. Select File, Options, from the Replication Console to access this screen.
Administrator rights
This icon is a computer with a gear and it indicates the Double-Take security is
set to administrator access.
Monitor rights
This icon is a computer with a magnifying glass and it indicates the Double-Take
security is set to monitor only access.
No rights
This icon is a lock and it indicates the Double-Take security is set to no access.
5. To log off of a Double-Take machine, right-click the machine name on the left pane of the
Replication Console and select Logout.
For installation and licensing instructions, see the Double-Take Installation, Licensing, and
Activation document.
Once your job is created and running, see the following sections to manage your job.
l Managing and controlling full server jobs on page 245: You can view status information about
your job and learn how to control the job.
l Failing over full server jobs on page 261: Use this section when a failover condition has been met
or whenever you want to failover.
l Reversing full server jobs on page 263: Use this section to reverse protection. The source (what
was your original target hardware) is now sending data to the target (what was your original
source hardware).
l Notes: Oracle Enterprise Linux support is for the mainline kernel only, not the
Unbreakable kernel.
l Operating system: Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version: 6.4 through 6.6
l Notes: Oracle Enterprise Linux support includes the mainline kernel only for
version 6.3 and includes both the mainline kernel and the Unbreakable kernel for
versions 6.4 and 6.5.
l Operating system: SUSE Linux Enterprise
l Version: 10.3 and 10.4
XenPAE
l Kernel type for x86-64 (64-bit) architectures: Default, SMP, Xen
l Operating system: Ubuntu
l Version: 10.04.3
l Kernel version: 2.6.32-33
l Operating system: Ubuntu
l Version: 10.04.4
l Kernel version: 2.6.32-38
l Operating system: Ubuntu
l Version: 12.04.2
l Kernel version: 3.5.0-23
l Operating system: Ubuntu
l Version: 12.04.3
l Kernel version: 3.8.0-29
For all operating systems except Ubuntu, the kernel version must match the expected
kernel for the specified release version. For example, if /etc/redhat-release declares
the system to be a Red Hat 5.10 system, the installed kernel must match that release.
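A quick way to picture this check is to compare /etc/redhat-release against the running kernel. The release-to-kernel mapping below is an illustrative placeholder only; consult your distribution's documentation for the kernel that actually ships with each release.

import platform

EXPECTED_KERNEL_PREFIX = {   # illustrative mapping, not an official list
    "Red Hat Enterprise Linux Server release 5.10": "2.6.18-371",
}

def kernel_matches_release(release_file="/etc/redhat-release"):
    try:
        with open(release_file) as f:
            release = f.read().strip()
    except OSError:
        return None          # not a Red Hat style system
    expected = next((kernel for name, kernel in EXPECTED_KERNEL_PREFIX.items()
                     if release.startswith(name)), None)
    if expected is None:
        return None          # release not covered by the illustrative table
    return platform.release().startswith(expected)

print(kernel_matches_release())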
l Packages and servicesEach Linux server must have the following packages and services
installed before you can install and use Double-Take. See your operating system documentation
for details on these packages and utilities.
l sshd (or the package that installs sshd)
l lsb
l parted
l /usr/bin/which
l /usr/sbin/dmidecode
l /usr/bin/scp (only if you will be performing push installations from the Double-Take Console)
occurs before the required reboot, the target may not operate properly or it may not boot.
l System memory: The minimum system memory on each server should be 1 GB. The
recommended amount for each server is 2 GB.
l Disk space for program files: This is the amount of disk space needed for the Double-Take
program files. This is approximately 285 MB on each Linux server.
Make sure you have additional disk space for Double-Take queuing, logging, and so on.
l Server name: Double-Take includes Unicode file system support, but your server name
must still be in ASCII format. If you need to use a server's fully-qualified domain name, the
server name cannot start with a numeric character because it will be interpreted as an IP
address. Additionally, all Double-Take servers must have a unique server name.
l Protocols and networking: Your servers must meet the following protocol and networking
requirements.
l Your servers must have TCP/IP with static IP addressing.
l IPv4 is the only supported version.
l NAT support: Full server jobs can support NAT environments only when reverse protection is
disabled. Additionally, your NAT environment must be an IP-forwarding configuration with
one-to-one port mappings. Port forwarding is not supported. Make sure you have added your
servers to the Double-Take Console using the correct IP address.
l Name resolution: Your servers must have name resolution or DNS. The Double-Take Console
must be able to resolve the target, and the target must be able to resolve all source servers. For
details on name resolution options, see your Linux documentation or online Linux resources.
l Ports: Port 1501 is used for localhost communication. Ports 1500, 1505, 1506, 6325, and 6326
are used for component communication and must be opened on any firewall that might be in use.
(A quick connectivity check sketch follows this requirements list.)
l Security: Double-Take security is granted through membership in user groups. The groups can
be local or LDAP (Lightweight Directory Access Protocol). A user must provide a valid local
account that is a member of the Double-Take security groups.
l SELinux policy: SELinux should be disabled on the source.
l UEFI: The source boot mode cannot be UEFI (Unified Extensible Firmware Interface).
l Trusted Boot (tboot): Trusted Boot is not supported and should be disabled on the source and
target.
l Snapshots: Double-Take snapshots are not supported with Linux full server jobs.
l Supported configurations: The following table identifies the supported configurations for a
Linux full server job.
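A quick connectivity check against the component communication ports can help confirm that a firewall is not blocking them. The server name below is a hypothetical example; port 1501 is omitted because it is used only for localhost communication.

import socket

DT_PORTS = [1500, 1505, 1506, 6325, 6326]   # component communication ports
TARGET_HOST = "dt-server.example.com"       # hypothetical Double-Take server

for port in DT_PORTS:
    try:
        with socket.create_connection((TARGET_HOST, port), timeout=5):
            print("port %d reachable" % port)
    except OSError as exc:
        print("port %d NOT reachable: %s" % (port, exc))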
l Current Servers: This list contains the servers currently available in your console
session. Servers that are not licensed for the workflow you have selected will be filtered out
of the list. Select your source server from the list.
l Find a New Server: If the server you need is not in the Current Servers list, click the
Find a New Server heading. From here, you can specify a server along with credentials
for logging in to the server. If necessary, you can click Browse to select a server from a
network drill-down list.
If you enter the source server's fully-qualified domain name, the Double-Take Console
will resolve the entry to the server short name. If that short name resides in two different
domains, this could result in name resolution issues. In this case, enter the IP address of
the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
7. By default, Double-Take selects the system and boot volumes for protection. You will be unable to
deselect these volumes. Select any other volumes on the source that you want to protect.
If desired, click the Replication Rules heading and expand the volumes under Folders. You will
see that Double-Take automatically excludes particular files that cannot be used during the
protection. If desired, you can exclude other files that you do not want to protect, but be careful
when excluding data. Excluded volumes, folders, and/or files may compromise the integrity of
your installed applications.
If you return to this page using the Back button in the job creation workflow, your
Workload Types selection will be rebuilt, potentially overwriting any manual replication
rules that you specified. If you do return to this page, confirm your Workload Types and
Replication Rules are set to your desired settings before proceeding forward again.
If you enter the target server's fully-qualified domain name, the Double-Take Console will
resolve the entry to the server short name. If that short name resides in two different
domains, this could result in name resolution issues. In this case, enter the IP address of
the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
For the Job name, specify a unique name for your job.
l Apply source network configuration to the target: If you select this option, your
source IP addresses will failover to the target. If your target is on the same subnet as the
source (typical of a LAN environment), you should select this option. Do not select this
option if you are using a NAT environment that has a different subnet on the other side of
the NAT router.
Do not apply the source network configuration to the target in a WAN environment
unless you have a VPN infrastructure so that the source and target can be on the
same subnet, in which case IP address failover will work the same as a LAN
configuration. If you do not have a VPN, you will have to reconfigure the routers by
moving the source's subnet from the source's physical network to the target's
physical network. There are a number of issues to consider when designing a
solution that requires router configuration to achieve IP address failover. Since the
route to the source's subnet will be changed at failover, the source server must be
the only system on that subnet, which in turn requires all server communications to
pass through a router. Additionally, it may take several minutes or even hours for
routing tables on other routers throughout the network to converge.
l Retain target network configuration: If you select this option, the target will retain all of
its original IP addresses. If your target is on a different subnet (typical of a WAN or
NAT environment), you should select this option.
l Enable reverse protection: After failover, your target server is lost. Reverse protection
allows you to store a copy of the target's system state on the source server, so that the
target server will not be lost. The reverse process will bring the target identity back on the
source hardware and establish protection. After the reverse, the source (running on the
original target hardware) will be protected to the target (running on the original source
hardware).
In a LAN environment, you may want to consider having two IP addresses on each server.
This will allow you to monitor and failover one (or more) IP addresses, while still leaving an
IP address that does not get failed over. The IP address that is not failed over is called a
reserved IP address and can be used for the reverse process. The reserved IP address
remains with the server hardware. Ideally, the reserved IP address should not be used for
production communications. The reserved IP address can be on the same or a different
subnet from your production IP addresses; however, if the subnet is different, it should be on
a different network adapter. The reserved IP addresses will also be used to route Double-
Take data.
You do not have to have a second IP address on each server. It is acceptable to use the
production IP address for reverse protection. In this case, Double-Take will block the DNS
record for that address while it is failed over.
l Select a reserved IP address on the source: Specify an IP address on the
source which will be used to permanently identify the source server. The IP address
you specify will not be failed over to the target in the event of a failure. This allows you
to reverse protection back to the source after a failover.
l Select a reserved IP address on the target: Specify an IP address on the target
which will be used to permanently identify the target server. The IP address you
specify will not be lost during failover. This allows you to reverse protection back to
the source after a failover.
When reverse protection is enabled, your source server must have space to store,
process, and apply the target's system state data.
When the job is first started and reverse protection is enabled, an image of the
target's system state is mirrored to the source server. This mirror may cause a
performance impact on your source server. This impact is only temporary, and
l Disable reverse protection: If you do not use reverse protection, after a failover, your
target server will be lost. In order to continue protecting your data, you will have to manually
rebuild your original source and restart protection, which can be a long and complicated
process. Also, if you disable reverse protection, you will lose the activated target license
after failover. This is your only supported option if you are using a NAT environment.
l Send data to the target server using this route: Specify an IP address on the
target to route Double-Take data. This allows you to select a different route for
Double-Take traffic. For example, you can separate regular network traffic and
Double-Take traffic on a machine with multiple IP addresses. You can also select or
manually enter a public IP address (which is the public address of the NAT router) if
you are using a NAT environment.
If you change the IP address on the target which is used for the target route, you
will be unable to edit the job. If you need to make any modifications to the job, it will
have to be deleted and re-created.
For Map source network adapters to target network adapters, specify how you want the IP
addresses associated with each NIC on the source to be mapped to a NIC on the target. Do not
mix public and private networks. Also, if you have enabled reverse protection, make sure that your
NICs with your reserved IP addresses are mapped to each other.
l Mirror Options: Choose a comparison method and whether to mirror the entire file or
only the bytes that differ in each file.
l Do not compare files. Send the entire file. Double-Take will not perform any
comparisons between the files on the source and target. All files will be mirrored to
the target, sending the entire file. This is equivalent to selecting the mirror all files
option prior to Double-Take version 7.1.
l Compare file attributes and data. Send the attributes and bytes that differ.
Double-Take will compare file attributes and the file data and will mirror only the
attributes and bytes that are different. This is equivalent to selecting the mirror
different files and use block checksum options prior to Double-Take version 7.1. If
you are using a database application on your source, select this option.
l General Options: Choose your general mirroring options.
l Delete orphaned files: An orphaned file is a file that exists in the replica data on
the target, but does not exist in the protected data on the source. This option
specifies if orphaned files should be deleted on the target.
Orphaned file configuration is a per target configuration. All jobs to the same
target will have the same orphaned file configuration.
If delete orphaned files is enabled, carefully review any replication rules that
use wildcard definitions. If you have specified wildcards to be excluded from
protection, files matching those wildcards will also be excluded from
orphaned file processing and will not be deleted from the target. However, if
you have specified wildcards to be included in your protection, those files that
fall outside the wildcard inclusion rule will be considered orphaned files and
will be deleted from the target.
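The interaction between wildcard rules and orphan processing can be summarized in a small decision sketch. The fnmatch-style patterns and paths are hypothetical; the real evaluation is performed by Double-Take on the target.

import fnmatch

def would_be_deleted_as_orphan(target_path, exists_on_source,
                               include_patterns, exclude_patterns):
    # Return True if the file on the target would be treated as an orphan.
    if exists_on_source:
        return False
    if any(fnmatch.fnmatch(target_path, p) for p in exclude_patterns):
        return False   # excluded from protection, so excluded from orphan processing too
    if include_patterns and not any(fnmatch.fnmatch(target_path, p) for p in include_patterns):
        return True    # falls outside the inclusion rule, so it is considered orphaned
    return True        # protected path with no matching source file: orphaned

print(would_be_deleted_as_orphan("/data/report.tmp", False, ["/data/*"], ["*.bak"]))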
To help reduce the amount of bandwidth needed to transmit Double-Take data, compression
allows you to compress data prior to transmitting it across the network. In a WAN environment this
provides optimal use of your network resources. If compression is enabled, the data is
compressed before it is transmitted from the source. When the target receives the compressed
data, it decompresses it and then writes it to disk. You can set the level from Minimum to
Maximum to suit your needs.
Keep in mind that the process of compressing data impacts processor usage on the source. If you
notice an impact on performance while compression is enabled in your environment, either adjust
to a lower level of compression, or leave compression disabled. Use the following guidelines to
determine whether you should enable compression.
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling
compression.
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg,
.gif) and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some
image formats, such as .bmp and .tif, are uncompressed, so enabling compression would be
beneficial for those types.
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Double-Take data.
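The guidelines above amount to a simple decision, sketched below. The CPU threshold and extension list come from the guidance; the function itself is illustrative and not a product feature.

ALREADY_COMPRESSED = {".jpg", ".gif", ".wmv", ".mp3", ".mpg"}   # inherently compressed types

def should_enable_compression(queueing_on_source, avg_cpu_percent,
                              dominant_extension, using_wan_accelerator):
    if using_wan_accelerator:
        return False             # use one or the other, not both
    if dominant_extension.lower() in ALREADY_COMPRESSED:
        return False             # data is already compressed; little to gain
    if avg_cpu_percent > 85:
        return False             # be cautious: compression adds CPU load on the source
    return queueing_on_source    # queued data suggests compression will help

print(should_enable_compression(True, 40, ".bmp", False))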
All jobs from a single source connected to the same IP address on a target will share the
same compression configuration.
Bandwidth limitations are available to restrict the amount of network bandwidth used for Double-
Take data transmissions. When a bandwidth limit is specified, Double-Take never exceeds that
allotted amount. The bandwidth not in use by Double-Take is available for all other network traffic.
All jobs from a single source connected to the same IP address on a target will share the
same bandwidth configuration.
a Preset bandwidth limit rate from the common bandwidth limit values. The Bandwidth
field will automatically update to the bytes per second value for your selected bandwidth.
This is the maximum amount of data that will be transmitted per second. If desired, modify
the bandwidth using a bytes per second value. The minimum limit should be 3500 bytes per
second.
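A fixed bytes-per-second budget behaves like the sketch below, which enforces the 3500 bytes per second minimum mentioned above. Double-Take applies its limit internally; this only illustrates how such a cap paces transmissions.

import time

class BandwidthLimiter:
    def __init__(self, bytes_per_second):
        self.rate = max(bytes_per_second, 3500)   # the minimum limit should be 3500 B/s
        self.allowance = float(self.rate)
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        # Block until nbytes can be sent without exceeding the configured rate.
        now = time.monotonic()
        self.allowance = min(self.rate, self.allowance + (now - self.last) * self.rate)
        self.last = now
        if nbytes > self.allowance:
            time.sleep((nbytes - self.allowance) / self.rate)  # wait for budget to accrue
            self.allowance = 0.0
            self.last = time.monotonic()
        else:
            self.allowance -= nbytes

limiter = BandwidthLimiter(10000)   # e.g. a 10,000 bytes per second preset
limiter.wait_for(4096)              # pauses as needed before each send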
12. Click Next to continue.
13. Double-Take validates that your source and target are compatible. The Summary page displays
your options and validation items.
Errors are designated by a white X inside a red circle. Warnings are designated by a black
exclamation point (!) inside a yellow triangle. A successful validation is designated by a white
checkmark inside a green circle. You can sort the list by the icon to see errors, warnings, or
successful validations together. Click on any of the validation items to see details. You must
correct any errors before you can continue. Depending on the error, you may be able to click Fix
or Fix All and let Double-Take correct the problem for you. For those errors that Double-Take
cannot correct automatically, you will need to modify the source or target to correct the error, or
you can select a different target. You must revalidate the selected servers, by clicking Recheck,
until the validation check passes without errors.
After a job is created, the results of the validation checks are logged to the job log. See the
Double-Take Reference Guide for details on the various Double-Take log files.
14. Once your servers have passed validation and you are ready to establish protection, click Finish,
and you will automatically be taken to the Manage Jobs page.
Column 1 (Blank)
The first blank column indicates the state of the job.
The job is in a warning state. This icon is also displayed on any server groups that
you have created that contain a job in a warning state.
The job is in an error state. This icon is also displayed on any server groups that you
have created that contain a job in an error state.
Name
The name of the job
Target data state
l OK: The data on the target is in a good state.
l Mirroring: The target is in the middle of a mirror process. The data will not be in a
good state until the mirror is complete.
l Mirror Required: The data on the target is not in a good state because a remirror
is required. This may be caused by an incomplete or stopped mirror or an operation
may have been dropped on the target.
l Busy: The source is low on memory causing a delay in getting the state of the data
on the target.
l Not Loaded: Double-Take target functionality is not loaded on the target server.
This may be caused by a license key error.
l Not Ready: The Linux drivers have not yet completed loading on the target.
l Unknown: The console cannot determine the status.
Mirror remaining
The total number of mirror bytes that are remaining to be sent from the source to the
target
Mirror skipped
The total number of bytes that have been skipped when performing a difference. These
bytes are skipped because the data is not different on the source and target.
Replication queue
The total number of replication bytes in the source queue
Disk queue
The amount of disk space being used to queue data on the source
Bytes sent
The total number of mirror and replication bytes that have been transmitted to the
target
Bytes sent (compressed)
The total number of compressed mirror and replication bytes that have been
transmitted to the target. If compression is disabled, this statistic will be the same as
Bytes sent.
Delete
Stops (if running) and deletes the selected jobs.
Provide Credentials
Changes the login credentials that the job (which is on the target machine) uses to
authenticate to the servers in the job. This button opens the Provide Credentials dialog
box where you can specify the new account information and which servers you want to
update. See Providing server credentials on page 37. You will remain on the Manage
Jobs page after updating the server credentials. If your servers use the same
credentials, make sure you also update the credentials on the Manage Servers page
so that the Double-Take Console can authenticate to the servers in the console
session. See Managing servers on page 29.
Start
Starts or resumes the selected jobs.
If you have previously stopped protection, the job will restart mirroring and replication.
If you have previously paused protection, the job will continue mirroring and replication
from where it left off, as long as the Double-Take queue was not exhausted during the
time the job was paused.
Pause
Pauses the selected jobs. Data will be queued on the source while the job is paused.
All jobs from the same source to the same IP address on the target will be paused.
Stop
Stops the selected jobs. The jobs remain available in the console, but there will be no
mirroring or replication data transmitted from the source to the target. Mirroring and
replication data will not be queued on the source while the job is stopped, requiring a
remirror when the job is restarted. The type of remirror will depend on your job settings.
Take Snapshot
Snapshots are not applicable to full server for Linux jobs.
Manage Snapshots
Snapshots are not applicable to full server for Linux jobs.
Failback
Starts the failback process. Failback does not apply to full server for Linux jobs.
Restore
Starts the restoration process. Restoration does not apply to full server for Linux jobs.
Reverse
Reverses protection. The original source hardware will be reversed to the target
identity and the job will start mirroring in the reverse direction with the job name and log
file names changing accordingly. After the mirror is complete, the job will continue
running in the opposite direction. See Reversing full server jobs on page 263 for the
process and details of reversing a full server job.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Job name
The name of the job
Job type
Each job type has a unique job type name. This job is a Full Server Failover for Linux
job. For a complete list of all job type names, press F1 to view the Double-Take
Console online help.
Health
4. If you want to modify the workload items or replication rules for the job, click Edit workload or
replication rules. Modify the Workload item you are protecting, if desired. Additionally, you can
modify the specific Replication Rules for your job.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included. To modify the items selected, highlight a
volume, folder, or file and click Add Rule. Specify if you want to Include or Exclude the item.
Also, specify if you want the rule to be recursive, which indicates the rule should automatically be
applied to the subdirectories of the specified path. If you do not select Recursive, the rule will not
be applied to subdirectories. (A short matching sketch follows the note below.)
Click OK to return to the Edit Job Properties page.
If you remove data from your workload and that data has already been sent to the target,
you will need to manually remove that data from the target. Because the data you
removed is no longer included in the replication rules, Double-Take orphan file detection
cannot remove the data for you. Therefore, you have to remove it manually.
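The recursive flag can be pictured with a small path-matching sketch. The rule path and candidate paths are hypothetical, and this is only an approximation of how the console applies rules.

import os.path

def rule_covers(rule_path, recursive, candidate):
    # Return True if an include/exclude rule for rule_path covers candidate.
    rule_path = rule_path.rstrip("/")
    if candidate == rule_path or os.path.dirname(candidate) == rule_path:
        return True                  # the path itself and its direct contents
    if recursive and candidate.startswith(rule_path + "/"):
        return True                  # recursive rules also cover subdirectories
    return False

print(rule_covers("/data", recursive=False, candidate="/data/archive/2015/file.db"))  # False
print(rule_covers("/data", recursive=True,  candidate="/data/archive/2015/file.db"))  # True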
The following table identifies the controls and the table columns in the Job logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Job logs window. The messages are still logged to their respective files on
the server.
Copy
This button copies the messages selected in the Job logs window to the Windows
clipboard.
Clear
This button clears the Job logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
Resolve any maintenance updates on the source that may require the server to be rebooted
before failover or failback. Also, do not failover or failback if the target is waiting on a reboot after
applying maintenance. If failover occurs before the required reboot, the target may not operate
properly or it may not boot.
1. On the Manage Jobs page, highlight the job that you want to failover and click Failover,
Cutover, or Recover in the toolbar.
2. Select the type of failover to perform.
l Failover to live data: Select this option to initiate a full, live failover using the current data
on the target. The source is automatically shut down if it is still running. Then the target will
stand in for the source by rebooting and applying the source identity, including its system
state, on the target. After the reboot, the target becomes the source, and the target no
longer exists.
l Perform test failover: This option should only be used if your target is a virtual server. It
is like a live failover, except the source is not shut down. Therefore, you should isolate the
virtual server from the network before beginning the test using the following procedure.
a. Stop the job.
b. Take a snapshot of the target virtual server using your hypervisor console.
c. Attach the target virtual server to a null virtual switch or one that does not have
access to your network infrastructure.
d. Perform the test failover and complete any testing on the virtual server.
e. After your testing is complete, revert to the snapshot of the target virtual server from
before the test started.
f. Reconnect the target virtual server to the proper virtual switch.
g. Restart the job.
If your target is a physical server, contact technical support if you want to test failover,
because you will have to rebuild your target system volume after the test.
l Failover to a snapshot: This option is not available for full server jobs.
3. Select how you want to handle the data in the target queue.
l Apply data in target queues before failover or cutover: All of the data in the target
queue will be applied before failover begins. The advantage to this option is that all of the
data that the target has received will be applied before failover begins. The disadvantage is
that, depending on the amount of data in queue, the amount of time to apply all of the data
could be lengthy.
l Discard data in the target queues and failover or cutover immediately: All of the
data in the target queue will be discarded and failover will begin immediately. The
advantage to this option is that failover will occur immediately. The disadvantage is that any
data in the target queue will be lost.
If you need to update DNS after failover, there is a sample DNS update script located in
/etc/DT/sysprep.d. You may need to modify the script for your environment. If you need basic
assistance with script modifications, contact technical support. Assistance with advanced
scripting will be referred to Professional Services.
If you did not enable reverse protection or if you have to rebuild your source, you will have to
reverse your protection manually.
1. Fix the issue that caused your original source server to fail.
2. Connect the original source server to the network.
3. Make sure the production NIC on your original source is online. If the NIC is disabled or
unplugged, you will not be able to reverse. Make sure you continue to access the servers through
the reserved IP addresses, but you can disregard any IP address conflicts for the primary NIC.
Since the new source (running on the original target hardware) already has the source's address
assigned to it, the source reserved IP address (set during the job creation workflow) will be used
to identify the source. The machine names for both servers will be the same at this point. The
reserved IP addresses which were selected during the job creation will be shown in parentheses to
identify the machines.
4. On the Manage Jobs page, highlight the job that you want to reverse. If the job is not listed, you
may need to add your servers to your console again. Use the reserved IP addresses and local
credentials.
5. Highlight the job you want to reverse and click Reverse in the toolbar. During the reverse
process, you will see various states for the job. The Reversing state will be displayed when the
target identity is being established on the original source hardware. When the reverse process is
complete, the target (on the original source hardware) will reboot. At this point, your source is still
running on your original target hardware with the source name, but the original source hardware
now has the target identity. After reboot, the job will start synchronizing. During the synchronizing
process, protection is being established from the source (on the original target hardware) to the
target (on the original source hardware). The reverse protection is also established in the opposite
direction.
6. To go back to your original hardware, highlight the job and click Failover, Cutover, or Recover.
The source identity will now be applied to the target (on the original source hardware), and the
target identity will again be gone. Both servers will have the source identity.
7. To bring back the target identity, highlight the job and click Reverse. The same process as above
will be repeated, but on the opposite servers. When the reverse is complete, you will be back to
your original identities on the original hardware.
For installation and licensing instructions, see the Double-Take Installation, Licensing, and
Activation document.
Once your job is created and running, see the following sections to manage your job.
l Managing and controlling full server to ESX appliance jobs on page 287: You can view status
information about your job and learn how to control the job.
l Failing over full server to ESX appliance jobs on page 302: Use this section when a failover
condition has been met or whenever you want to failover.
l Notes: Oracle Enterprise Linux support is for the mainline kernel only, not the
Unbreakable kernel.
l Operating system: Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version: 6.4 through 6.6
l Notes: Oracle Enterprise Linux support includes the mainline kernel only for
version 6.3 and includes both the mainline kernel and the Unbreakable kernel for
versions 6.4 and 6.5.
l Operating system: SUSE Linux Enterprise
l Version: 10.3 and 10.4
XenPAE
l Kernel type for x86-64 (64-bit) architectures: Default, SMP, Xen
l Operating system: Ubuntu
l Version: 10.04.3
l Kernel version: 2.6.32-33
l Operating system: Ubuntu
l Version: 10.04.4
l Kernel version: 2.6.32-38
l Operating system: Ubuntu
l Version: 12.04.2
l Kernel version: 3.5.0-23
l Operating system: Ubuntu
l Version: 12.04.3
l Kernel version: 3.8.0-29
For all operating systems except Ubuntu, the kernel version must match the expected
kernel for the specified release version. For example, if /etc/redhat-release declares
the system to be a Red Hat 5.10 system, the installed kernel must match that release.
l Packages and servicesEach Linux server must have the following packages and services
installed before you can install and use Double-Take. See your operating system documentation
for details on these packages and utilities.
l sshd (or the package that installs sshd)
l lsb
l parted
l /usr/bin/which
l /usr/sbin/dmidecode
l /usr/bin/scp (only if you will be performing push installations from the Double-Take Console)
l The appliance is pre-configured for optimal performance. You do not need to modify the
l A single virtual recovery appliance can protect a maximum of 59 volume groups and raw
File
l Host, Local Operations: Create Virtual Machine, Delete Virtual Machine, and
l Scheduled Task: Create Tasks, Modify Task, Remove Task, and Run Task
l Virtual Machine, Configuration: Add existing disk, Add new disk, Add or remove
l vMotion: Host vMotion is only supported if you are using vCenter. Storage vMotion is not
supported.
l System memory: The minimum system memory on each server should be 1 GB. The
recommended amount for each server is 2 GB.
l Disk space for program files: This is the amount of disk space needed for the Double-Take
program files. This is approximately 285 MB on a Linux source server. The appliance needs
approximately 620 MB.
Make sure you have additional disk space for Double-Take queuing, logging, and so on.
l Server name: Double-Take includes Unicode file system support, but your server name
(Supported configurations table: Server to Host Configuration, with Description, Supported, and
Not Supported columns)
l Current Servers: This list contains the servers currently available in your console
session. Servers that are not licensed for the workflow you have selected will be filtered out
of the list. Select your source server from the list.
l Find a New Server: If the server you need is not in the Current Servers list, click the
Find a New Server heading. From here, you can specify a server along with credentials
for logging in to the server. If necessary, you can click Browse to select a server from a
network drill-down list.
If you enter the source server's fully-qualified domain name, the Double-Take Console
will resolve the entry to the server short name. If that short name resides in two different
domains, this could result in name resolution issues. In this case, enter the IP address of
the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
5. Choose the type of workload that you want to protect. Under Server Workloads, in the
6. By default, Double-Take selects the system and boot volumes for protection. You will be unable to
deselect these volumes. Select any other volumes on the source that you want to protect.
The swap partition is excluded by default and you cannot select it; however, it will be
created on the replica.
If desired, click the Replication Rules heading and expand the volumes under Folders. You will
see that Double-Take automatically excludes particular files that cannot be used during the
protection. If desired, you can exclude other files that you do not want to protect, but be careful
when excluding data. Excluded volumes, folders, and/or files may compromise the integrity of
your installed applications.
If you return to this page using the Back button in the job creation workflow, your
Workload Types selection will be rebuilt, potentially overwriting any manual replication
rules that you specified. If you do return to this page, confirm your Workload Types and
Replication Rules are set to your desired settings before proceeding forward again.
If you enter the target server's fully-qualified domain name, the Double-Take Console will
resolve the entry to the server short name. If that short name resides in two different
domains, this could result in name resolution issues. In this case, enter the IP address of
the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
For the Job name, specify a unique name for your job.
Select one of the volumes from the list to indicate the volume on the target where you want to
store the configuration files for the new virtual server when it is created. The target volume must
have enough Free Space. You can select the location of the .vmdk files under Replica Virtual
Machine Volumes.
l Replica virtual machine display name: Specify the name of the replica virtual machine.
This will be the display name of the virtual machine on the host system.
l Number of processors: Specify how many processors to create on the new virtual
machine. The number of processors on the source is displayed to guide you in making an
appropriate selection. If you select fewer processors than the source, your clients may be
impacted by slower responses.
l Amount of memory: Specify the amount of memory, in MB, to create on the new virtual
machine. The memory on the source is displayed to guide you in making an appropriate
selection. If you select less memory than the source, your clients may be impacted by
slower responses.
l Map source virtual switches to target virtual switches: Identify how you want to
handle the network mapping after failover. The Source Network Adapter column lists the
NICs from the source. Map each one to a Target Network Adapter, which is a virtual
network on the target.
If your source has volume groups, you will see them listed in the Volume list. Highlight a volume
group and set the available Volume Group Properties that are displayed to the right of the
Volume list. The fields displayed in the Volume Group Properties will depend on your selection
for Virtual disk.
l Virtual Disk: Specify if you want Double-Take to create a new disk for your replica virtual
machine or if you want to use an existing disk.
Reusing a virtual disk can be useful for pre-staging data on a LAN and then relocating the
virtual disk to a remote site after the initial mirror is complete. You save time by skipping the
virtual disk creation steps and performing a difference mirror instead of a full mirror. With
pre-staging, less data will need to be sent across the wire initially. In order to use an
existing virtual disk, it must be a valid virtual disk, it cannot be attached to any other virtual
machine, and it cannot have any associated snapshots.
Each pre-existing disk must be located on the target datastore specified. If you have copied
the .vmdk file to this location manually, be sure you have also copied the associated -
flat.vmdk file too. If you have used vCenter to copy the virtual machine, the associated file
will automatically be copied. There are no restrictions on the file name of the .vmdk, but the
associated -flat.vmdk file must have the same base name and the reference to that flat file
in the .vmdk must be correct. Double-Take will move, not copy, the virtual disk files to the
appropriate folders created by the replica, so make sure the selected target datastore is
where you want the replica virtual disk to be located.
In a WAN environment, you may want to take advantage of using an existing disk by using a
process similar to the following.
a. Create a job in a LAN environment, letting Double-Take create the virtual disk for
you.
b. Complete the mirror process locally.
c. Delete the job and when prompted, do not delete the replica.
d. Move the virtual disk files to the desired target datastore. Do not forget to move the
associated -flat.vmdk file if you move the files manually.
e. Create a new protection job for the same source and reuse your existing disk.
If your source has multiple partitions inside a single .vmdk, you can only use an
existing virtual disk that Double-Take created. You can only use an existing virtual
disk created outside of Double-Take if there is one partition in each pre-existing
disk.
l Datastore: Specify the datastore where you want to store the .vmdk files for the volume
group. You can specify the location of the virtual machine configuration files on the previous
Choose Volumes to Protect page.
l Replica disk format: If you are creating a new disk, specify the format of the disk that will
be created.
l Flat Disk: This disk format allocates the full amount of the disk space immediately,
but does not initialize the disk space to zero until it is needed. This disk format is only
available on ESX 5; if you select this disk type on ESX 4, a thick disk will be created.
l Thick: This disk format allocates the full amount of the disk space immediately,
l Physical volume maximum size: If you are creating a new disk, specify the maximum
size, in MB or GB, of the virtual disks used to create the volume group. The default value is
equal to the maximum size that can be attached to the datastore you selected. That will
depend on your ESX version, your file system version, and the block size of your datastore.
l Volume Group size: If you are creating a new disk, specify the maximum size, in MB or
GB, of the volume group. The default value will match the source. This value cannot be less
than the total size of the logical volumes that you are trying to create on the volume group.
l Pre-existing virtual disks path: If you are using an existing virtual disk, specify the
location of the existing virtual disks that you want to reuse.
If you are using an existing virtual disk, you will not be able to modify the logical volume
properties.
The size and space displayed may not match the output of the Linux df command. This is
because df shows the size of the mounted file system not the underlying partition which
may be larger. Additionally, Double-Take uses powers of 1024 when computing GB, MB,
and so on. The df command typically uses powers of 1000 and rounds up to the nearest
whole value.
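The discrepancy is just a difference in units, as the short sketch below shows for a hypothetical 100 GiB partition: the same byte count reported with powers of 1024 (as Double-Take computes it) versus powers of 1000 (as df typically reports it).

def size_in_gb(nbytes, base=1024):
    return nbytes / float(base ** 3)

partition_bytes = 107374182400   # hypothetical partition, 100 * 1024**3 bytes
print("Double-Take style (1024):", round(size_in_gb(partition_bytes, 1024), 2), "GB")
print("df style (1000):", round(size_in_gb(partition_bytes, 1000), 2), "GB")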
In some cases, the replica virtual machine may use more virtual disk space than the
size of the source volume due to differences in how the virtual disk's block size is
formatted and how hard links are handled. To avoid this issue, specify the size of
your replica to be at least 5 GB larger.
The size and space displayed may not match the output of the Linux df command. This is
because df shows the size of the mounted file system not the underlying partition which
may be larger. Additionally, Double-Take uses powers of 1024 when computing GB, MB,
and so on. The df command typically uses powers of 1000 and rounds up to the nearest
whole value.
l Virtual DiskSpecify if you want Double-Take to create a new disk for your replica virtual
machine or if you want to use an existing disk. Review the details above under Volume
Group Properties Virtual Disk for information on using an existing disk.
l Disk sizeThis field displays the size of the partition on the source.
l Used spaceThis field displays the amount of disk space in use on the source partition.
l DatastoreSpecify the datastore where you want to store the .vmdk files for the partition.
You can specify the location of the virtual machine configuration files on the previous
Choose Volumes to Protect page.
l Replica disk format: Specify the format of the disk that will be created.
l Flat Disk: This disk format allocates the full amount of the disk space immediately, but does not initialize the disk space to zero until it is needed. This disk format is only available on ESX 5; if you select this disk type on ESX 4, a thick disk will be created.
l Thick: This disk format allocates the full amount of the disk space immediately.
l Replica volume size: Specify the size, in MB or GB, of the replica partition on the target. The value must be at least the size of the specified Used space on that partition.
l Pre-existing disks path: If you are using an existing virtual disk, specify the location of the existing virtual disks that you want to reuse.
Updates made during failover will be based on the network adapter name when
protection is established. If you change that name, you will need to delete the job
and re-create it so the new name will be used during failover.
If you update one of the advanced settings (IP address, gateway, or DNS server),
then you must update all of them. Otherwise, the remaining items will be left blank.
If you do not specify any of the advanced settings, the replica virtual machine will be
assigned the same network configuration as the source.
By default, the source IP address will be included in the target IP address list as the default address. If you do not want the source IP address to be the default address on the target after failover, remove that address from the Replica IP addresses list.
Linux operating systems only support one gateway, so the first gateway listed will
be used.
l Mirror Options: Choose a comparison method and whether to mirror the entire file or only the bytes that differ in each file.
l Do not compare files. Send the entire file. Double-Take will not perform any comparisons between the files on the source and target. All files will be mirrored to the target, sending the entire file. This is equivalent to selecting the mirror all files option prior to Double-Take version 7.1.
l Compare file attributes and data. Send the attributes and bytes that differ. Double-Take will compare file attributes and the file data and will mirror only the attributes and bytes that are different. This is equivalent to selecting the mirror different files and use block checksum options prior to Double-Take version 7.1. If you are using a database application on your source, select this option. (A simple illustration of block-level comparison appears below, after this list.)
l General Options: Choose your general mirroring options.
l Delete orphaned files: An orphaned file is a file that exists in the replica data on the target, but does not exist in the protected data on the source. This option specifies if orphaned files should be deleted on the target.
Orphaned file configuration is a per-target configuration. All jobs to the same target will have the same orphaned file configuration.
If delete orphaned files is enabled, carefully review any replication rules that
use wildcard definitions. If you have specified wildcards to be excluded from
protection, files matching those wildcards will also be excluded from
orphaned file processing and will not be deleted from the target. However, if
you have specified wildcards to be included in your protection, those files that
fall outside the wildcard inclusion rule will be considered orphaned files and
will be deleted from the target.
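The following sketch only illustrates the block-comparison idea behind a difference mirror; it is not Double-Take's actual algorithm, and the file paths and the 64 KB block size are assumptions. It compares a source file and its replica copy block by block and reports which blocks differ and would need to be resent:

    #!/bin/sh
    # Illustration only: per-block checksum comparison, not Double-Take's internal logic.
    BLOCK=65536                      # assumed 64 KB comparison block
    SRC=/data/db/datafile            # hypothetical source file
    TGT=/replica/db/datafile         # hypothetical replica copy
    size=$(stat -c %s "$SRC")
    blocks=$(( (size + BLOCK - 1) / BLOCK ))
    i=0
    while [ "$i" -lt "$blocks" ]; do
      s=$(dd if="$SRC" bs=$BLOCK skip=$i count=1 2>/dev/null | md5sum | cut -d' ' -f1)
      t=$(dd if="$TGT" bs=$BLOCK skip=$i count=1 2>/dev/null | md5sum | cut -d' ' -f1)
      [ "$s" != "$t" ] && echo "block $i differs and would be resent"
      i=$((i + 1))
    done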
By default, Double-Take will select a target route for transmissions. If desired, specify an alternate
route on the target that the data will be transmitted through. This allows you to select a different
route for Double-Take traffic. For example, you can separate regular network traffic and Double-
Take traffic on a machine with multiple IP addresses. You can also select or manually enter a
public IP address (which is the public IP address of the server's NAT router) if you are using a
NAT environment.
To help reduce the amount of bandwidth needed to transmit Double-Take data, compression
allows you to compress data prior to transmitting it across the network. In a WAN environment this
provides optimal use of your network resources. If compression is enabled, the data is
compressed before it is transmitted from the source. When the target receives the compressed
data, it decompresses it and then writes it to disk. You can set the level from Minimum to
Maximum to suit your needs.
Keep in mind that the process of compressing data impacts processor usage on the source. If you
notice an impact on performance while compression is enabled in your environment, either adjust
to a lower level of compression, or leave compression disabled. Use the following guidelines to
determine whether you should enable compression.
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling
compression.
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg, .gif) and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some image files, such as .bmp and .tif, are uncompressed, so enabling compression would be beneficial for those types. (A quick way to check whether a file is already compressed is shown below.)
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Double-Take data.
All jobs from a single source connected to the same IP address on a target will share the
same compression configuration.
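If you are unsure whether your data is already compressed, a rough, optional check (not part of Double-Take) is to gzip a representative file and compare sizes. A ratio close to 1.0 suggests the data is already compressed and compression will add CPU cost for little benefit. The file path below is only an example:

    FILE=/data/media/video.mpg              # hypothetical sample file
    orig=$(stat -c %s "$FILE")              # original size in bytes
    comp=$(gzip -c "$FILE" | wc -c)         # compressed size in bytes
    awk -v o="$orig" -v c="$comp" 'BEGIN { printf "compression ratio: %.2f\n", o / c }'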
Bandwidth limitations are available to restrict the amount of network bandwidth used for Double-
Take data transmissions. When a bandwidth limit is specified, Double-Take never exceeds that
allotted amount. The bandwidth not in use by Double-Take is available for all other network traffic.
All jobs from a single source connected to the same IP address on a target will share the
same bandwidth configuration.
Select a Preset bandwidth limit rate from the common bandwidth limit values. The Bandwidth field will automatically update to the bytes per second value for your selected bandwidth. This is the maximum amount of data that will be transmitted per second. If desired, modify the bandwidth using a bytes per second value. The minimum limit should be 3500 bytes per second.
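As a worked example (the link rate is an assumption), a 1.544 Mbps T1 preset corresponds to roughly 193,000 bytes per second, well above the 3500 bytes per second minimum:

    MBPS=1.544                                                    # assumed link rate in megabits per second
    BPS=$(awk -v m="$MBPS" 'BEGIN { printf "%d", m * 1000000 / 8 }')
    echo "Bandwidth value: $BPS bytes per second"                 # prints 193000
    if [ "$BPS" -lt 3500 ]; then
      echo "Below the recommended 3500 bytes per second minimum"
    fi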
13. Click Next to continue.
14. Double-Take validates that your source and target are compatible. The Summary page displays
your options and validation items.
Errors are designated by a white X inside a red circle. Warnings are designated by a black
exclamation point (!) inside a yellow triangle. A successful validation is designated by a white
checkmark inside a green circle. You can sort the list by the icon to see errors, warnings, or
successful validations together. Click on any of the validation items to see details. You must
correct any errors before you can continue. Depending on the error, you may be able to click Fix
or Fix All and let Double-Take correct the problem for you. For those errors that Double-Take
cannot correct automatically, you will need to modify the source or target to correct the error, or
you can select a different target. You must revalidate the selected servers, by clicking Recheck,
until the validation check passes without errors.
After a job is created, the results of the validation checks are logged to the job log. See the
Double-Take Reference Guide for details on the various Double-Take log files.
15. Once your servers have passed validation and you are ready to establish protection, click Finish,
and you will automatically be taken to the Manage Jobs page.
Column 1 (Blank)
The first blank column indicates the state of the job.
The job is in a warning state. This icon is also displayed on any server groups that
you have created that contain a job in a warning state.
The job is in an error state. This icon is also displayed on any server groups that you
have created that contain a job in an error state.
Name
The name of the job
Target data state
l OK: The data on the target is in a good state.
l Mirroring: The target is in the middle of a mirror process. The data will not be in a good state until the mirror is complete.
l Mirror Required: The data on the target is not in a good state because a remirror is required. This may be caused by an incomplete or stopped mirror or an operation may have been dropped on the target.
l Busy: The source is low on memory causing a delay in getting the state of the data on the target.
l Not Loaded: Double-Take target functionality is not loaded on the target server. This may be caused by a license key error.
l Not Ready: The Linux drivers have not yet completed loading on the target.
l Unknown: The console cannot determine the status.
Mirror remaining
The total number of mirror bytes that are remaining to be sent from the source to the
target
Mirror skipped
The total number of bytes that have been skipped when performing a difference mirror. These bytes are skipped because the data is not different on the source and target.
Replication queue
The total number of replication bytes in the source queue
Disk queue
The amount of disk space being used to queue data on the source
Bytes sent
The total number of mirror and replication bytes that have been transmitted to the
target
Bytes sent (compressed)
The total number of compressed mirror and replication bytes that have been transmitted to the target. If compression is disabled, this statistic will be the same as Bytes sent.
Delete
Stops (if running) and deletes the selected jobs.
If you no longer want to protect the source and no longer need the replica of the source
on the target, select to delete the associated replica virtual machine. Selecting this
option will remove the job and completely delete the replica virtual machine on the
target.
If you no longer want to mirror and replicate data from the source to the target but still
want to keep the replica of the source on the target, select to keep the associated
replica virtual machine.
Provide Credentials
Changes the login credentials that the job (which is on the target machine) uses to
authenticate to the servers in the job. This button opens the Provide Credentials dialog
box where you can specify the new account information and which servers you want to
update. See Providing server credentials on page 37. You will remain on the Manage
Jobs page after updating the server credentials. If your servers use the same
credentials, make sure you also update the credentials on the Manage Servers page
so that the Double-Take Console can authenticate to the servers in the console
session. See Managing servers on page 29.
Pause
Pauses the selected jobs. Data will be queued on the source while the job is paused.
All jobs from the same source to the same IP address on the target will be paused.
Stop
Stops the selected jobs. The jobs remain available in the console, but there will be no
mirroring or replication data transmitted from the source to the target. Mirroring and
replication data will not be queued on the source while the job is stopped, requiring a
remirror when the job is restarted. The type of remirror will depend on your job settings.
Take Snapshot
Snapshots are not applicable to full server to ESX appliance jobs.
Manage Snapshots
Snapshots are not applicable to full server to ESX appliance jobs.
Failback
Starts the failback process. Failback does not apply to full server to ESX appliance
jobs.
Restore
Starts the restoration process. Restore does not apply to full server to ESX appliance
jobs.
Recover
Recovers the selected DR job. Recovery does not apply to full server to ESX appliance
jobs.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Job name
The name of the job
Job type
Each job type has a unique job type name. This job is a Full Server to ESX Appliance
job. For a complete list of all job type names, press F1 to view the Double-Take
Console online help.
Health
4. If you want to modify the workload items or replication rules for the job, click Edit workload or
replication rules. Modify the Workload item you are protecting, if desired. Additionally, you can
modify the specific Replication Rules for your job.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included. To modify the items selected, highlight a
volume, folder, or file and click Add Rule. Specify if you want to Include or Exclude the item.
Also, specify if you want the rule to be recursive, which indicates the rule should automatically be
applied to the subdirectories of the specified path. If you do not select Recursive, the rule will not
be applied to subdirectories.
If you need to remove a rule, highlight it in the list at the bottom and click Remove Rule. Be
careful when removing rules. Double-Take may create multiple rules when you are adding
directories. For example, if you add /home/admin to be included in protection, then /home will be
excluded. If you remove the /home exclusion rule, then the /home/admin rule will be removed
also.
Click OK to return to the Edit Job Properties page.
If you remove data from your workload and that data has already been sent to the target, you will need to manually remove that data from the target. Because the data you removed is no longer included in the replication rules, Double-Take orphan file detection cannot remove it for you.
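For instance, if /home/admin/olddata (a hypothetical path) was removed from the replication rules after it had already been mirrored, you could review and then delete its replica copy by hand. On an ESX appliance target, the replica data resides under the appliance's mount point for the corresponding replica disk, so adjust the path accordingly:

    # Run on the target against the replica copy of the removed data.
    ls -lR /home/admin/olddata   # review exactly what will be removed
    rm -rf /home/admin/olddata   # then delete the data that is no longer protected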
The following table identifies the controls and the table columns in the Job logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Job logs window. The messages are still logged to their respective files on
the server.
Copy
This button copies the messages selected in the Job logs window to the Windows
clipboard.
Clear
This button clears the Job logs window. The messages are not cleared from the respective files on the server. If you want to view all of the messages again, close and reopen the Job logs window.
2. Select the type of failover to perform.
l Failover to live data: Select this option to initiate a full, live failover using the current data on the target. This option will shut down the source machine (if it is online), stop the protection job, and start the replica virtual machine on the target with full network connectivity.
l Perform test failover: This option is not applicable to full server to ESX appliance jobs.
l Failover to a snapshot: This option is not available for full server to ESX appliance jobs.
3. Select how you want to handle the data in the target queue.
l Discard data in the target queues and failover or cutover immediately: All of the data in the target queue will be discarded and failover will begin immediately. The advantage to this option is that failover will occur immediately. The disadvantage is that any data in the target queue will be lost.
4. When you are ready to begin failover, click Failover, Cutover, or Recover.
If you need to update DNS after failover, there is a sample DNS update script located in /etc/DT/sysprep.d. You may need to modify the script for your environment. If you need basic assistance with script modifications, contact technical support. Assistance with advanced scripting will be referred to Professional Services.
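As a point of reference, a minimal DNS update of this kind could be scripted with nsupdate as sketched below. This is a hypothetical example, not the contents of the shipped sample script, and the DNS server, zone, host name, addresses, and TTL are all placeholders for your environment:

    #!/bin/sh
    # Hypothetical post-failover DNS update sketch; adjust names and addresses.
    # A signed update (nsupdate -k <keyfile>) may be required by your DNS server.
    nsupdate <<'EOF'
    server 192.168.1.10
    zone example.com
    update delete source1.example.com A
    update add source1.example.com 300 A 192.168.1.50
    send
    EOF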