
What would you do if you knew?

Teradata Data Mover


Installation, Configuration, and Upgrade Guide
for Customers
Release 16.10
B035-4102-067K
June 2017
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.

Teradata, Aster, BYNET, Claraview, DecisionCast, IntelliBase, IntelliCloud, IntelliFlex, QueryGrid, SQL-MapReduce, Teradata Decision Experts,
"Teradata Labs" logo, Teradata ServiceConnect, and Teradata Source Experts are trademarks or registered trademarks of Teradata Corporation or its
affiliates in the United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
Amazon Web Services, AWS, Amazon Elastic Compute Cloud, Amazon EC2, Amazon Simple Storage Service, Amazon S3, AWS CloudFormation, and
AWS Marketplace are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Avro, Apache Hadoop, Apache Hive, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the
Apache Software Foundation in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda Access,
Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and Maximum Support
are servicemarks of Axeda Corporation.
CENTOS is a trademark of Red Hat, Inc., registered in the U.S. and other countries.
Cloudera and CDH are trademarks or registered trademarks of Cloudera Inc. in the United States, and in jurisdictions throughout the world.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are either registered trademarks or trademarks of EMC Corporation in the United States and/or
other countries.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, IBM Spectrum Protect, and z/OS are trademarks or registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United States and
other countries.
NetVault is a trademark of Quest Software, Inc.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAP is the trademark or registered trademark of SAP AG in Germany and in several other countries.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
Sentinel® is a registered trademark of SafeNet, Inc.
Simba, the Simba logo, SimbaEngine, SimbaEngine C/S, SimbaExpress and SimbaLib are registered trademarks of Simba Technologies Inc.
SPARC is a registered trademark of SPARC International, Inc.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Veritas, the Veritas Logo and NetBackup are trademarks or registered trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other
countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
The information contained in this document is provided on an "as-is" basis, without warranty of any kind, either express or
implied, including the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. Some
jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to you. In no event will
Teradata Corporation be liable for any indirect, direct, special, incidental, or consequential damages, including lost profits or
lost savings, even if expressly advised of the possibility of such damages.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are not
announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features, functions,
products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions, products, or services
available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated without
notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any time without
notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this document.
Please e-mail: [email protected]
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata Corporation
will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform, create derivative
works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata Corporation will be free
to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including developing, manufacturing, or
marketing products or services incorporating Feedback.
Copyright © 2015 - 2017 by Teradata. All Rights Reserved.
Table of Contents

Preface...................................................................................................................................................................5
Purpose............................................................................................................................................................................ 5
Audience..........................................................................................................................................................................5
Revision History.............................................................................................................................................................5
Additional Information.................................................................................................................................................5
Teradata Support............................................................................................................................................................6
Product Safety Information.......................................................................................................................................... 6

Chapter 1:
Overview............................................................................................................................................................. 7
Dependencies..................................................................................................................................................................7
Best Practices for Data Mover Networking................................................................................................................ 9

Section I: Installing and Configuring Software

Chapter 2:
Configuring the Environment..............................................................................................13
Configuring the Data Mover Daemon..........................................................................................................13
The daemon.properties File................................................................................................................13
Configuration Properties.................................................................................................................... 15
Configuring the Data Mover Agent.............................................................................................................. 21
Installing and Configuring the Data Mover Agent on a Linux Teradata Server.........................21
The agent.properties File.....................................................................................................................21
Configuring the Data Mover Command-Line Interface............................................................................ 23
Configuring the Data Mover Command-Line Interface on a Linux Teradata Server................23
Installing and Configuring the Data Mover Command-Line Interface on Non-Teradata
Servers....................................................................................................................................................23
The commandline.properties File......................................................................................................25
Configuring the Data Mover REST Service..................................................................................................27
About Configuring High Availability........................................................................................................... 27
Configuring Automatic Failover........................................................................................................27
High Availability Configuration Scenario............................................................................ 28
Verifying Data Mover Package Installation......................................................................... 29
Example: Verifying Data Mover Package Installation........................................................ 30
Setting Up Host Files or DNS.................................................................................................30
Example: Setting Up Host Files or DNS............................................................................... 30
Verifying Required Ports Open..............................................................................................30
Example: Verifying Required Ports are Open......................................................................31


Defining Unique Data Mover Agent Names........................................................................31


Example: Defining Unique Data Mover Agent Names...................................................... 32
Synchronizing the Master and Slave Repositories...............................................................32
Example: Synchronizing the Master and Slave Repositories............................................. 33
Configuring Dual Active Java Message Service (JMS) Brokers......................................... 34
Example: Configuring the Dual Active Java Message Service (JMS) Brokers................. 35
Configuring the Sync Service................................................................................................. 36
Example: Configuring the Sync Service................................................................................ 36
Configuring the Cluster and Starting the Monitoring Service.......................................... 37
Failover.properties File................................................................................................37
Example: Configuring the Cluster and Starting the Monitoring Service......................... 38
Checking the Status of Master and Slave Components...................................................... 39
Example: Checking the Status of the Master and Slave Components.............................. 39
Starting the Synchronization Service.....................................................................................39
Example: Starting the Synchronization Service................................................................... 40
Verifying Failover Configuration.......................................................................................... 40
Example: Verifying the Failover Configuration...................................................................40
Completing the Automatic Failover Setup........................................................................... 41
Configuring the Synchronization Service Without Automatic Failover......................................41
Configuring the Synchronization Service.............................................................................41
Configuring Data Mover to Use Teradata Ecosystem Manager................................................................41
Configuring Multiple Managed Servers........................................................................................................43
Configuring Data Mover to Log to Server Management............................................................................43
Enabling Logging Server Management Alerts When a Failover Occurs......................................43
Configuring Data Mover Managed Server to Increase Network Throughput........................................44
About Adding Source and Target COP Entries...............................................................................44
About Defining Routes for Source and Target COP Entries......................................................... 45
Restarting the Network....................................................................................................................... 45
About Verifying the Route Changes................................................................................................. 45
Data Mover Log Files.......................................................................................................................................46
Data Mover Properties Files Preserved During Upgrades......................................................................... 46

Chapter 3:
Administrative Tasks.......................................................................................................................49
Data Mover Components Script.................................................................................................................... 49
Changing DBC and DATAMOVER Passwords on the Data Mover Server............................................49
Creating a Diagnostic Bundle for Support................................................................................................... 51

Section II: Upgrading Software

Chapter 4:
Upgrading Software..........................................................................................................................57
About Upgrading Data Mover Software.......................................................................................................57
Creating an Incident........................................................................................................................................ 57
Upgrading the Data Mover Command-Line Interface on Non-Teradata Servers................................. 58
Upgrading the Data Mover Agent on a Linux Teradata Server................................................................ 60

Preface

Purpose
This guide provides customer information and procedures for installing, configuring, and upgrading
Teradata Data Mover software.

Audience
This guide is intended for use by:
• System administrators
• Database administrators and relational database developers
• Customers
• Teradata Customer Support

Revision History
Date Release Description
June 2017 16.10 Initial release.

Additional Information
Related Links

URL Description
https://access.teradata.com Use Teradata Support to access Orange Books, technical alerts, and
knowledge repositories, view and join forums, and download software
packages.
http://www.teradata.com External site for product, service, resource, support, and other customer
information.

Related Documents
Documents are located at http://www.info.teradata.com.

Title Publication ID
Teradata Data Mover User Guide B035-4101
Describes how to use the Teradata Data Mover portlets and command-line interface.
Parallel Upgrade Tool (PUT) Reference B035-5713
Describes how to install application software using PUT.
Teradata Viewpoint User Guide B035-2206
Describes the Teradata Viewpoint portal, portlets, and system administration features.

Customer Education
Teradata Customer Education delivers training for your global workforce, including scheduled public
courses, customized on-site training, and web-based training. For information about the classes, schedules,
and the Teradata Certification Program, go to www.teradata.com/TEN/.

Customer Support
Customer support is available around-the-clock, seven days a week through the Global Technical Support
Center (GSC). To learn more, go to https://access.teradata.com.

Teradata Support
Teradata customer support is located at https://access.teradata.com.

Product Safety Information


This document may contain information addressing product safety practices related to data or property
damage, identified by the word Notice. A notice indicates a situation which, if not avoided, could result in
damage to property, such as equipment or data, but not related to personal injury.

Example

Notice:
Improper use of the Reconfiguration utility can result in data loss.

CHAPTER 1
Overview

Dependencies
Data Mover Server Requirements

Software             Level
Operating System     SUSE Linux Enterprise Server 11 SP 3
Internal Repository  Teradata Database 15.10
Development Tools    teradata-jre7-1.7.0_121-tdcl and later

Note:
For non-Teradata servers, you must install or upgrade to JRE 7
before installing or upgrading any components.

Data Mover Component Version Requirement


As of Data Mover 16.00, only the major and minor versions of the Data Mover daemon and the Data Mover command-line interface or portlet must match. For instance:

Command Line Interface/Portlet Version   Daemon Version   Supported
16.00.01.00                              16.00.02.00      Yes
15.11.00.00                              16.00.00.00      No
Note:
The Data Mover agent must be the exact same version as the Data Mover daemon.
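The matching rules above can be sketched as a small version check. This helper is purely illustrative and not part of Data Mover itself:

```python
def versions_compatible(cli_version: str, daemon_version: str) -> bool:
    """Return True when the major and minor version parts match.

    As of Data Mover 16.00, only major.minor must agree between the
    CLI/portlet and the daemon; maintenance and patch digits may differ.
    """
    return cli_version.split(".")[:2] == daemon_version.split(".")[:2]


def agent_compatible(agent_version: str, daemon_version: str) -> bool:
    """The agent must be the exact same version as the daemon."""
    return agent_version == daemon_version


# Examples from the table above:
print(versions_compatible("16.00.01.00", "16.00.02.00"))  # True
print(versions_compatible("15.11.00.00", "16.00.00.00"))  # False
```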

External Component Requirements


The versions of components that Data Mover works with are listed below.

Software                                                            Level
Teradata Database                                                   Versions 14.00–16.10
Hortonworks Data Platform for Commodity Hardware                    Versions 2.3.x, 2.4, and 2.5
Hortonworks Data Platform for Teradata Appliance for Hadoop         Versions 2.3.x, 2.4, and 2.5
Teradata Connector for Hadoop                                       Version 1.5.x
Teradata Aster Database                                             Versions 6.0, 6.10, and 6.20
Teradata Viewpoint                                                  Version 16.00 or 16.10
Cloudera Distribution of Hadoop for Commodity Hardware              Versions 5.4.x, 5.7, 5.8, and 5.9
Cloudera Distribution of Hadoop for Teradata Appliance for Hadoop   Versions 5.4.x, 5.7, 5.8, and 5.9

Required Permissions
You must be a root user to install and configure Data Mover components.

Required Open Ports on the Data Mover Server


The ports listed below must be open for incoming and outgoing traffic on the Data Mover server:
Port Number Used By
22 SSH
1025 CLI and JDBC
25268 ARC access module
25168 ARC server
61616 ActiveMQ
25368 Master sync service
1080 RESTful API
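A quick way to confirm these ports accept connections is a plain TCP connect test. The sketch below is a generic illustration (the host name is a placeholder, and a successful connect only shows something is listening, not that the correct service is behind it):

```python
import socket

# Ports from the table above, mapped to the service that uses each one.
DATA_MOVER_PORTS = {
    22: "SSH",
    1025: "CLI and JDBC",
    25268: "ARC access module",
    25168: "ARC server",
    61616: "ActiveMQ",
    25368: "Master sync service",
    1080: "RESTful API",
}


def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def report(host: str) -> dict:
    """Map each Data Mover port to its reachability from this machine."""
    return {port: check_port(host, port) for port in DATA_MOVER_PORTS}
```

For example, `report("dm-server1")` (a hypothetical host name) returns a dictionary you can scan for `False` entries before installing.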

Open Ports to Access Hadoop Services


The ports listed below are the default Hadoop ports to open for Data Mover to access each Hadoop service.
These ports are configurable on Hadoop.
Port Number Used By
50111 WebHCat
11000 Oozie
50070 WebHDFS
10000 Hive JDBC
9083 Hive metastore
14000 HttpFS

Required Open Ports to Access Aster


The port listed below must be open on the Aster Queen node for Data Mover to access the Aster system:
Port Number Used By
2406 Aster JDBC


Best Practices for Data Mover Networking


A comprehensive knowledge article about Data Mover networking best practices is available to help you
understand and resolve a variety of performance issues, including test and validation procedures for
suggested changes.
The Data Mover Networking Best Practices knowledge article is located at http://cks.teradata.com/8525621800464274/0/41a6f00b2ff7e06485257d330008e2df.

SECTION I
Installing and Configuring Software

CHAPTER 2
Configuring the Environment

Configuring the Data Mover Daemon


1. Edit the daemon.properties file and restart the Data Mover daemon to implement the changes.
For properties that can be set dynamically, the changes take effect one minute after the updated
daemon.properties file is saved. There is no need to restart the daemon service if you are only
updating dynamic properties.
2. Use the list_configuration and save_configuration commands to modify the other Data
Mover properties.
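The daemon.properties file follows the plain key=value convention of Java properties files. As background for the edits described above, here is a minimal, hypothetical reader/writer sketch for that format (the keys shown are examples from this chapter; this is not a Data Mover tool):

```python
def load_properties(text: str) -> dict:
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props


def set_property(text: str, key: str, value: str) -> str:
    """Return text with key set to value, preserving all other lines."""
    lines, found = [], False
    for line in text.splitlines():
        if line.strip().partition("=")[0].strip() == key:
            lines.append(f"{key}={value}")  # replace the existing setting
            found = True
        else:
            lines.append(line)
    if not found:
        lines.append(f"{key}={value}")      # append a new setting
    return "\n".join(lines) + "\n"
```

Note that this sketch does not handle every Java properties feature (colons as separators, line continuations, escapes); it only covers the simple key=value form used in the tables below.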

The daemon.properties File


arcserver.port=port
    A long-lived server port on the machine running the DM Daemon, which is used for inbound socket connections from DM Agents.
    Default: 25168

broker.port=port
    The port number of the machine where the Java Message Service (JMS) message broker is listening.
    Default: 61616

broker.url=url
    The hostname or IP address of the machine running the Java Message Service (JMS) message broker.
    Default: localhost

cluster.enabled=setting for cluster
    When set to True, establishes a connection to a secondary Java Message Service (JMS) broker in case the primary JMS broker fails.
    Default: False

viewpoint.url
    The hostname or IP address of the Viewpoint Authentication server.
    Example: viewpoint.url=http://localhost
    Default: http://localhost

viewpoint.port
    The port number of the Viewpoint Authentication server.
    Example: viewpoint.port=80
    Default: 80

logger.useTviLogger=setting for TVI messages
    The Server Management logger; can be set to true or false. If set to true, fatal error messages are sent to Server Management. Dynamic property.[1]
    Default: True

jobExecutionCoordinator.maxConcurrentJobs=maximum number of jobs
    The maximum number of jobs allowed to run on the daemon at the same time. Additional jobs are placed on the queue and run when slots become available. Dynamic property.[1]
    Default: 20

jobExecutionCoordinator.maxQueuedJobs=maximum number of jobs allowed in queue
    The maximum number of jobs allowed in the job queue. Additional jobs are placed in a higher-level memory queue until slots are available in the job queue. Dynamic property.[1]
    Default: 20

log4j.appender.logfile=org.apache.log4j.RollingFileAppender
    Informs the logging application to use a specific appender. It is recommended that this property value not be changed.

log4j.appender.logfile.file=file path name
    The relative or absolute path of the log file. If changing the log file location, specify the absolute path of the file. For Windows, specify backslashes instead of forward slashes, for example, C:\Program File\Teradata\Log\dmDaemon.log. Both the file path and file name can be set dynamically.[1]
    Default: dmDaemon.log

log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
    A dynamic property.[1]
    Note: Do not edit. This is an internal setting for the logging infrastructure.

log4j.appender.logfile.maxBackupIndex=<number of backup files>
    The number of backup logging files that are created. After the maximum number of files has been created, the oldest file is erased. Dynamic property.[1]
    Example: If Max Backups = 3, three backup logs are created:
    • dmDaemon.log.1
    • dmDaemon.log.2
    • dmDaemon.log.3
    If the current dmDaemon.log size exceeds 10MB, it rolls to become the new dmDaemon.log.1 and a new dmDaemon.log is created. The previous dmDaemon.log.1 becomes the new dmDaemon.log.2, the previous dmDaemon.log.2 becomes the new dmDaemon.log.3, and the previous dmDaemon.log.3 is deleted.
    Default: 3

log4j.appender.logfile.maxFileSize=<maximum size of log files>
    The maximum size of the logging file before being rolled over to backup files. Dynamic property.[1]
    Default: 10MB

log4j.appender.logfile.layout.ConversionPattern=<log file pattern layout>
    The pattern of the log file layout, where:
    • d = date
    • t = thread
    • p = log level
    • c = class name
    • m = message
    • n = platform-dependent line separator
    Dynamic property when the layout is PatternLayout.[1]
    Information for creating a layout is at: http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html
    Default: %d [%t] %-5p %c{3} - %m%n

log4j.rootLogger=<level of logging>
    The level of logging. The six levels, from trace level to application error, are TRACE | DEBUG | INFO | WARN | ERROR | FATAL. The value takes the form LOG_LEVEL,logfile. LOG_LEVEL can be updated dynamically, but logfile cannot.[1]
    Note: Do not remove the term logfile.
    Default: INFO,logfile

querygrid.manager.urls=url
    The hostname or IP address of the QueryGrid Manager servers. Supports up to two URLs, separated by commas.
    Example: querygrid.manager.urls=https://host1:9443,https://host2:9443
    Default: 9443

If the Viewpoint Authentication server has HTTPS enabled and you want to authenticate via HTTPS instead, set viewpoint.url to https://localhost and viewpoint.port to 443.
[1] For properties that can be set dynamically, the changes take effect one minute after the updated daemon.properties file is saved. There is no need to restart the daemon service if you are only updating dynamic properties. For example:
• If you change the value of log4j.rootLogger from the default of INFO,logfile to DEBUG,logfile, any debug messages generated start appearing in the log file one minute after the updated properties file is saved.
• If you change the value of jobExecutionCoordinator.maxConcurrentJobs from the default value of 20 to a new value of 25, the new value of 25 takes effect one minute after the updated daemon.properties file is saved.
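This one-minute pickup of dynamic properties can be modeled as a periodic modification-time poll. The watcher below is a generic sketch of that pattern, not Data Mover's actual implementation:

```python
import os
import time


class PropertiesWatcher:
    """Re-read a properties file whenever its modification time changes."""

    def __init__(self, path: str, poll_seconds: int = 60):
        self.path = path
        self.poll_seconds = poll_seconds
        self._mtime = None
        self.properties = {}

    def _parse(self) -> dict:
        """Parse simple key=value lines, skipping blanks and # comments."""
        props = {}
        with open(self.path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    props[key.strip()] = value.strip()
        return props

    def refresh(self) -> bool:
        """Reload if the file changed; return True when a reload happened."""
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            self._mtime = mtime
            self.properties = self._parse()
            return True
        return False

    def run(self):
        """Daemon-style loop: poll once per interval, forever."""
        while True:
            self.refresh()
            time.sleep(self.poll_seconds)
```

With `poll_seconds=60`, a saved change to a dynamic property is picked up on the next poll, i.e. within about a minute, mirroring the behavior described above.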

Configuration Properties
Property Description Default
agentCollector.agentHeartbeat Sets the amount of time in milliseconds to wait for an 600000
WaitMillis agent heartbeat before assuming it has gone out of
service.
blocked.job.maxAllowedLimit The maximum number of jobs that can be marked as 5
BLOCKED and retried. If a job is detected as blocked
Teradata Data Mover
Installation, Configuration, and Upgrade Guide for Customers Release 16.10 15
Chapter 2: Configuring the Environment
Configuring the Data Mover Daemon

Property Description Default


when the blocked.job.maxAllowedLimit has
already been reached, the job is added to the Job
Queue.
The value cannot be greater than 25% of the
maximum concurrent job limit.
blocked.job.retry.enabled When set to True, detects any locks on the source/ False
target objects being moved and retries running the job
after a specified interval.
blocked.job.retry.interval Sets an interval to retry running any jobs blocked 1 HOUR
because of locks on source/target objects.
Time unit can be specified as HOURS or MINUTES.
blocked.job.retry.maxInterval Sets the maximum interval for attempting to start any 1 HOUR
jobs blocked because of locks on source/target objects.
Jobs are marked as FAILED after this interval is
exceeded if they are still blocked.
Time unit can be specified as HOURS or MINUTES.
daemon.default.compareDDL.ena Enables/disables the default compareDDL behavior at
bled the daemon level.
databaseQueryService.useBaseV Sets all data dictionary queries on Teradata source and True
iewsOnly target systems to use the base views instead of X or VX
views.
deadlock.retry.enabled When set to True, if an SQL query execution fails False
with DBS error (2631) because of a deadlock, retries
executing the query after a specified interval.
deadlock.retry.interval The interval during which to retry executing an SQL 1 MINUTE
query that fails with a DBS deadlock error (2631).
Time unit can be specified as SECONDS or MINUTES.
deadlock.retry.maxAttempts The maximum number of attempts to retry executing 10
an SQL query that fails with a DBS deadlock error
(2631).
different.session.charsets.en Determines whether or not specifying different source False
abled and target session character sets in a job is allowed.
Default value False means this is not allowed.
event.table.default Default event table in which to save event details. NULL
Events are sent to this event table by default when
tmsm.mode is either BOTH or
ONLY_INTERNAL_TMSM. Individual jobs can use a
different event table by using the
log_to_event_table job definition parameter.
Multiple values can be set as follows:
<value>event1</value>
<value>event2</value>

Teradata Data Mover


16 Installation, Configuration, and Upgrade Guide for Customers Release 16.10
Chapter 2: Configuring the Environment
Configuring the Data Mover Daemon

Property Description Default


hadoop.connector.max.task.slot
  Specifies the maximum number of concurrent Hadoop Connector tasks.
  Default: 2

hadoop.default.mapper.export
  Specifies the number of mappers for Hadoop to Teradata jobs. This property is only used when hadoop.default.mapper.type is DataMover.
  Default: 8

hadoop.default.mapper.import
  Specifies the number of mappers for Teradata to Hadoop jobs. This property is only used when hadoop.default.mapper.type is DataMover.
  Default: 20

hadoop.default.mapper.type
  Determines which product decides the default number of mappers for a Hadoop job. Possible values are TDCH and DataMover.
  Default: DataMover

hanging.job.check.enabled
  If enabled, an internal process awakens periodically and reviews active jobs to see if any are hanging.
  Default: Disabled

hanging.job.check.rate
  Rate at which to check for hanging jobs (in hours).
  Default: 1 HOUR

hanging.job.timeout.acquisition
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the acquisition phase.
  Default: 1 HOUR

hanging.job.timeout.large.apply
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the TPTAPI apply phase for a large object.
  Default: 4 HOURS

hanging.job.timeout.large.build
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the ARC build phase for a large object.
  Default: 4 HOURS

hanging.job.timeout.large.initiate
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the initiate phase for a large object.
  Default: 4 HOURS

hanging.job.timeout.medium.apply
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the TPTAPI apply phase for a medium object.
  Default: 2 HOURS

hanging.job.timeout.medium.build
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the ARC build phase for a medium object.
  Default: 2 HOURS

hanging.job.timeout.medium.initiate
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the initiate phase for a medium object.
  Default: 2 HOURS
hanging.job.timeout.range.large.min
  Defines the minimum size (in MB, GB, or TB; GB is assumed if the unit is not provided) for an object to be considered a large object.
  Default: 10 GB

hanging.job.timeout.range.small.max
  Defines the maximum size (in MB, GB, or TB; MB is assumed if the unit is not provided) for an object to be considered a small object.
  Default: 5 MB

hanging.job.timeout.small.apply
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the TPTAPI apply phase for a small object.
  Default: 1 HOUR

hanging.job.timeout.small.build
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the ARC build phase for a small object.
  Default: 1 HOUR

hanging.job.timeout.small.initiate
  If the progress of a new job is not reported within this period (in hours), the job is stopped. This timeout is specifically for the initiate phase for a small object.
  Default: 1 HOUR

job.allowCommandLineUser
  When set to True, the daemon allows CommandLine requests when the security level is Daemon.
  Default: False

job.databaseClientEncryption
  When set to True, utilities such as ARC, JDBC, and TPTAPI initiate encrypted sessions to both the source and target database systems.
  Note: Performance decreases when encryption is initiated.
  Default: False

job.default.queryband
  Provides a set of name/value pairs to be used as the default query band for all jobs.
  Default: ApplicationName=DM;Version=16.00

job.default.queryband.enabled
  Enables use of the default query band feature.
  Default: False

job.force.direction
  Forces the direction of data movement from the source to the target system.

job.never.target.system
  Prevents certain database systems from ever being a target system in a Data Mover job.
  Default: False

job.onlineArchive
  When set to True, online archiving is used for objects that merit the use of ARC.
  Note: Performance decreases when this setting is used for object availability.
  Default: False

job.overwriteExistingObjects
  When set to True, objects that already exist on the target database system are overwritten.
  Default: False

job.securityMgmtLevel
  The level of security management enabled. Valid choices are Daemon and Job.
  Default: Job

job.useGroupUserIdPool
  Defines a set of system names and credentials. When creating a job, this group user id pool can be used for the source or target in place of directly specifying credentials in the job.
  Default: None

job.useSecurityMgmt
  When set to True, some Data Mover commands require the admin username and password to be specified when executing the command. For a complete list of commands affected by this parameter, see the Teradata Data Mover User Guide.
  Default: False

job.useSyncService
  Records any changes to the Data Mover repository tables (inserts/updates/deletes) in an audit log table. The value must be set to True to use the Sync service.
  Default: False

job.useUserIdPool
  Uses a target user from the pool of users. This enables the running of multiple ARC tasks at the same time.

queryGridManagerEncryptedPassword
  Sets the QueryGrid Manager user-encrypted password. Cannot be combined with queryGridManagerPassword.

queryGridManagerPassword
  Sets the QueryGrid Manager user password. Cannot be combined with queryGridManagerEncryptedPassword.

queryGridManagerUser
  Sets the QueryGrid Manager user.
  Default: Support

querygrid.wait.final.status
  When set to True, the system waits for QueryGrid Manager to return the final task status. Setting to True may impact system performance.
  Default: False

repository.purge.definition.enabled
  Enables the automated purging of job definitions.
  Default: False

repository.purge.enabled
  Enables/disables the repository purge feature. The default value False means purging is disabled.
  Default: False

repository.purge.history.unit
  The unit for job history data to be kept in the repository before purging should occur. The currently supported values are Days, Weeks, Months, and Years.
  Default: Days

repository.purge.history.unitcount
  The number of units for job history data to be kept in the repository before purging should occur. This value is combined with the value for repository.purge.history.unit to determine the amount of time before purging should occur for old jobs (for example, 60 days, 3 years, or 10 months). The value -1 disables purging by time.
  Default: 60

repository.purge.hour
  The hour when the daily repository purging should start. The default value 1 means 1 am.
  Default: 1

repository.purge.minute
  The minute when the daily repository purging should start.
  Default: 0

repository.purge.percent
  The percentage of repository permspace that needs to be available to determine when purging should occur. The default value 50 means the repository should be purged when more than 50% of the available permspace is in use. The value -1 disables purging by percentage.
  Default: 50

sqlh.max.task.slot
  Specifies the limit for the maximum number of concurrent T2H tasks.
  Default: 2

system.default.database.enabled
  Enables/disables the default target/staging databases at the system level. The default value False means disabled.
  Default: False

target.system.load.slots
  Controls the total number of load slots that Data Mover can use at one time on target Teradata systems.
  Default: 5

tmsm.frequency.bytes
  Controls the frequency of messages sent to Teradata Ecosystem Manager when using byte-based utilities (for example, ARC).
  Note: Providing a low value can hurt performance. Teradata recommends using the default value.
  Default: 2147483647 BYTES

tmsm.mode
  Controls how Data Mover directs Teradata Ecosystem Manager messages. Possible values are BOTH, ONLY_REAL_TMSM, ONLY_INTERNAL_TMSM, and NONE. When set to BOTH, messages are sent to the real Teradata Ecosystem Manager and written to the TDI event tables.
  Default: None
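As an illustration of how several of these settings combine, the following daemon.properties excerpt is a sketch only: it assumes a site that wants deadlock retries, a custom default query band, and time-based repository purging. Every value shown is an example to adapt, not a recommendation, and non-dynamic changes require a daemon restart to take effect.

```
# daemon.properties excerpt (illustrative values only)
deadlock.retry.enabled=True
deadlock.retry.interval=30 SECONDS
deadlock.retry.maxAttempts=5
job.default.queryband.enabled=True
job.default.queryband=ApplicationName=DM;Version=16.00
repository.purge.enabled=True
repository.purge.history.unit=Months
repository.purge.history.unitcount=6
repository.purge.hour=2
repository.purge.minute=0
```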

Configuring the Data Mover Agent


1. Edit the agent.properties file located in the /etc/opt/teradata/datamover directory and
restart the Data Mover agent to implement the changes.
For properties that can be set dynamically, the changes take effect one minute after the updated
agent.properties file is saved. There is no need to restart the agent service if you are only updating
dynamic properties.

Installing and Configuring the Data Mover Agent on a Linux Teradata Server

1. Enter ./dminstallupgradeagent at the command line to install the DMAgent and TTU packages.
2. Answer the prompts as needed, and press Enter to accept the defaults where appropriate.
3. Enter rpm -qa | grep DMAgent to verify the installation.

The agent.properties File


agent.id=<id>
  Unique identifier for this agent.
  Default: Agent1

arc.port=<port number>
  Port number that can be used by Teradata ARC to manage ARC streams.
  Default: 25268

cluster.enabled=<setting for cluster>
  When set to True, establishes a connection to a secondary Java Message Service (JMS) broker in case the primary JMS broker fails.
  Default: False

broker.port=<port number>
  The port number of the machine where the Java Message Service (JMS) Message Broker is listening.
  Default: 61616

broker.url=<url>
  The hostname or IP address of the machine running the Java Message Service (JMS) Message Broker.
  Default: localhost

log4j.appender.logfile=org.apache.log4j.RollingFileAppender
  Informs the logging application to use a specific appender. It is recommended that this property value not be changed.

log4j.appender.logfile.file=<log file path>
  Relative or absolute path of the log file. If changing the log file location, specify the absolute path of the file. For Windows, specify backslashes instead of forward slashes, for example, C:\ProgramFile\Teradata\Log\dmAgent.log. Both the file path and file name can be set dynamically.[1]
  Default: dmAgent.log

log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
  Dynamic property.[1]
  Note: Do not edit. This is an internal setting for the logging infrastructure.

log4j.appender.logfile.maxBackupIndex=<number of backup logging files>
  The number of backup logging files that are created. After the maximum number of files has been created, the oldest file is erased. Dynamic property.[1]
  Example: If Max Backups = 3, three backup logs are created:
  • dmAgent.log.1
  • dmAgent.log.2
  • dmAgent.log.3
  If the current dmAgent.log size exceeds 10MB, it rolls to become the new dmAgent.log.1 and a new dmAgent.log is created. The previous dmAgent.log.1 becomes the new dmAgent.log.2, the previous dmAgent.log.2 becomes the new dmAgent.log.3, and the previous dmAgent.log.3 is deleted.
  Default: 3

log4j.appender.logfile.maxFileSize=<maximum size of log files>
  The maximum size of the logging file before being rolled over to backup files. Dynamic property.[1]
  Default: 10MB

log4j.appender.logfile.layout.ConversionPattern=<log file pattern layout>
  The pattern of the log file layout, where:
  • d = date
  • t = thread
  • p = log level
  • c = class name
  • m = message
  • n = platform-dependent line separator
  Dynamic property.[1] Information for creating a layout is at: http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html
  Default: %d [%t] %-5p %c{3} - %m%n

log4j.rootLogger=<level of logging>
  Six levels of logging: TRACE | DEBUG | INFO | WARN | ERROR | FATAL, from trace level to application error. LOG_LEVEL can be updated dynamically, but not logfile.[1]
  The value is: LOG_LEVEL, logfile
  Note: Do not remove the term logfile.
  Default: INFO,logfile

agent.maxConcurrentTasks=<maximum number of tasks>
  The maximum number of tasks allowed to run on this agent at the same time. Note that tasks are distributed to agents using a round-robin method. Task size is not currently considered, so load may not be balanced if one agent is randomly assigned larger tasks than another.
  Default: 5

logger.useTviLogger=<setting for TVI messages>
  The TVI logger can be set to true or false. If set to true, fatal error messages are sent to TVI. Dynamic property.[1]
  Default: True

[1] For properties that can be set dynamically, the changes take effect one minute after the updated agent.properties file is saved. There is no need to restart the agent service if you are only updating dynamic properties. For example:
• If you changed the value of log4j.rootLogger from the default of INFO, logfile to DEBUG, logfile, any debug messages generated would start appearing in the log file one minute after saving the updated properties file.
• If you changed the value of agent.maxConcurrentTasks from the default value of 5 to a new value of 6, the new value of 6 would take effect one minute after saving the updated agent.properties file.
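The dynamic-property behavior described above can be exercised with a quick check before relying on a change. The sketch below uses a temporary copy of agent.properties with made-up values (agent name, broker host) so it can run anywhere; on a real server you would edit /etc/opt/teradata/datamover/agent.properties instead.

```shell
# Write a sample agent.properties to a temporary path (illustrative values).
cat > /tmp/agent.properties <<'EOF'
agent.id=AgentDM3
broker.url=dm1.example.com
broker.port=61616
agent.maxConcurrentTasks=6
log4j.rootLogger=DEBUG, logfile
EOF

# Confirm the dynamic properties that should take effect about one minute
# after the file is saved (no agent restart needed for these).
grep -E '^(agent\.maxConcurrentTasks|log4j\.rootLogger)=' /tmp/agent.properties
```

If the grep prints both lines, the file was saved with the intended values.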

Configuring the Data Mover Command-Line Interface

Configuring the Data Mover Command-Line Interface on a Linux Teradata Server
The Data Mover Command-Line Interface is installed for Linux Teradata servers with PUT. Configure the
command line properties to customize these settings.
1. Edit the commandline.properties file located in the /etc/opt/teradata/datamover directory
to customize the command line properties.

Installing and Configuring the Data Mover Command-Line Interface on Non-Teradata Servers
The Data Mover Command-Line Interface must be installed for Linux on non-Teradata servers, Windows,
Solaris Sparc, Ubuntu, and IBM AIX systems using the following procedure. You cannot use PUT to install
the Command-Line Interface on those systems.
Steps 1 through 4 do not apply to installation on Windows systems.
1. Add the following lines of code to the end of the /etc/profile file to update the JAVA_HOME and
PATH environment variables for all users:
export JAVA_HOME={full path of java installation location}
export PATH=$JAVA_HOME/bin:$PATH
2. Run the command:
source /etc/profile
3. Verify that the output shows JRE 1.7:
java -version
4. Open the .profile file of the root user and verify that the values for the JAVA_HOME and PATH
environment variables are the same as those defined in /etc/profile.
If the values are different, the java -version command will not produce the correct output during
install time, and the installation will fail.
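Steps 1 through 4 amount to making sure every shell resolves java from the same JAVA_HOME. The sketch below uses an assumed installation path; substitute the actual location of your Java installation.

```shell
# Assumed example path: replace with the real Java installation location.
export JAVA_HOME=/opt/java/jre1.7.0
export PATH=$JAVA_HOME/bin:$PATH

# Verify that JAVA_HOME/bin is now on PATH, so 'java -version' will pick up
# the intended JRE for all users sourcing /etc/profile.
if echo ":$PATH:" | grep -q ":$JAVA_HOME/bin:"; then
  echo "JAVA_HOME is on PATH"
else
  echo "JAVA_HOME is NOT on PATH" >&2
fi
```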
5. Install the appropriate DMCmdline software package for your system as follows:

Linux (for non-Teradata servers)
  a. At the command line, type export DM_INTERACTIVE_INSTALL=1 to set the environment variable for interactive install.
  b. At the command line, type the following:
     gunzip DMCmdline__linux_i386.16.10.00.00.tar.gz
     tar xvf DMCmdline__linux_i386.16.10.00.00.tar
     cd DMCmdline.16.10*
     rpm -Uvh DMCmdline__linux_noarch.16.10.00.00-1.rpm
  c. Answer the prompts as needed and press Enter to accept the defaults where appropriate.
  d. Type rpm -qa | grep DMCmdline to verify the installation.

Windows
  a. Copy the Data Mover directory on the media to a folder on the hard drive.
  b. Go to DataMover/Windows and unzip tdm-windows__windows_i386.16.10.00.00.zip.
  c. Go to the DISK1 directory and run setup.exe.
  d. Answer the prompts as needed and press Next to accept defaults where appropriate.
  e. Click Install when finished.
  f. Go to Start > Control Panel > Add or Remove Programs to verify installation.

Solaris Sparc
  a. At the command line, type the following to install:
     gunzip tdm-solaris__solaris_sparc.16.10.00.00.tar.gz
     tar xvf tdm-solaris__solaris_sparc.16.10.00.00.tar
     pkgadd -d `pwd` DMCmdline
  b. Answer the prompts as needed and press Enter to accept defaults where appropriate.
  c. Type pkginfo -l DMCmdline to verify the installation.

IBM AIX
  a. At the command line, type the following to install:
     gunzip tdm-aix__aix_power.16.10.00.00.tar.gz
     tar xvf tdm-aix__aix_power.16.10.00.00.tar
     installp -acF -d ./DMCmdline DMCmdline
  b. Answer the prompts as needed and press Enter to accept defaults where appropriate.
  c. Type lslpp -l "DM*" to verify the installation.

Ubuntu
  a. At the command line, type export DM_INTERACTIVE_INSTALL=1 to set the environment variable for interactive install.
  b. At the command line, type the following:
     tar xzvf tdm-dm-ubuntu__ubuntu.16.10.00.00.tar.gz
     cd DMCmdline.16.10.00.00
     dpkg -i DMCmdline__ubuntu_all.16.10.00.00-1.deb
     Note: In Ubuntu, -i is used for both install and upgrade.
  c. Answer the prompts as needed and press Enter to accept the defaults where appropriate.
  d. Type dpkg -l | grep dmcmdline to verify the installation.

6. If the broker URL needs to be changed, edit the commandline.properties file located in the TDM_install_directory\CommandLine directory after installation.
7. Specify the broker URL and broker port number for communicating with the JMS bus.
The broker URL value is the machine name or IP address of the machine where ActiveMQ runs. The
broker port value should also be the same as the port number that ActiveMQ uses. The defaults are
broker.url=localhost and broker.port=61616.
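For example, assuming ActiveMQ runs on a host named dm1 (a hypothetical name) and listens on its default port, the edited entries would look like this sketch:

```
# commandline.properties excerpt (hypothetical broker host)
broker.url=dm1
broker.port=61616
```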

The commandline.properties File


cluster.enabled=<setting for cluster>
  When set to True, establishes a connection to a secondary Java Message Service (JMS) broker in case the primary JMS broker fails.
  Default: False

broker.port=<port>
  The port number of the machine where the Java Message Service (JMS) Message Broker is listening.
  Default: 61616

broker.url=<url>
  The hostname or IP address of the machine running the Java Message Service (JMS) Message Broker.
  Default: localhost

log4j.appender.logfile=org.apache.log4j.RollingFileAppender
  Informs the logging application to use a specific appender. It is recommended that this property value not be changed.

log4j.appender.logfile.file=<file path name>
  Relative or absolute path of the log file. If changing the log file location, specify the absolute path of the file. For Windows, specify backslashes instead of forward slashes, for example, C:\Program File\Teradata\Log\dmCommandLine.log.
  Default: dmCommandLine.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
  Note: Do not edit. This is an internal setting for the logging infrastructure.

log4j.appender.logfile.maxBackupIndex=<number of backup files>
  The number of backup logging files that are created. After the maximum number of files has been created, the oldest file is erased.
  Example: If Max Backups = 3, three backup logs are created:
  • dmCommandLine.log.1
  • dmCommandLine.log.2
  • dmCommandLine.log.3
  If the current dmCommandLine.log size exceeds 10MB, it rolls to become the new dmCommandLine.log.1 and a new dmCommandLine.log is created. The previous dmCommandLine.log.1 becomes the new dmCommandLine.log.2, the previous dmCommandLine.log.2 becomes the new dmCommandLine.log.3, and the previous dmCommandLine.log.3 is deleted.
  Default: 3

log4j.appender.logfile.maxFileSize=<maximum size of log files>
  The maximum size of the logging file before being rolled over to backup files.
  Default: 10MB

log4j.appender.logfile.layout.ConversionPattern=<log file pattern layout>
  The pattern of the log file layout, where:
  • d = date
  • t = thread
  • p = log level
  • c = class name
  • m = message
  • n = platform-dependent line separator
  Information for creating a layout is at: http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html
  Default: %d [%t] %-5p %c{3} - %m%n

log4j.rootLogger=<level of logging>
  Six levels of logging: TRACE < DEBUG < INFO < WARN < ERROR < FATAL, from trace level to application error.
  The value is: <LOG_LEVEL>, logfile
  Note: Do not remove the term logfile.
  Default: INFO,logfile


Configuring the Data Mover REST Service


During installation of Data Mover, the DM REST component and REST Container are installed and started
automatically. You need to configure the tdmrest.properties file for your environment and restart the
service.
1. In the directory /etc/opt/teradata/datamover, locate tdmrest.properties.
2. Configure the following properties:
broker.url=<url>
  Hostname or IP address of the machine running the Java Message Service (JMS) message broker.
  Default: localhost

broker.port=<port>
  Port number of the machine where the JMS message broker is listening.
  Default: 61616

cluster.enabled=<setting>
  When set to true, establishes a connection to a secondary JMS broker for the cluster if the primary JMS broker fails.
  Default: False

response.timeout
  If the progress of a job is not reported within this period (in seconds), the job is aborted.
  Default: 30 sec
3. Restart the Data Mover REST service:
/etc/init.d/tdrestd start
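A minimal tdmrest.properties sketch, assuming the JMS broker runs on a hypothetical host named dm1 with the default port and the default 30-second response timeout; restart the REST service after saving the file:

```
# /etc/opt/teradata/datamover/tdmrest.properties excerpt (illustrative)
broker.url=dm1
broker.port=61616
cluster.enabled=false
response.timeout=30
```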

About Configuring High Availability


A High Availability configuration is the base configuration for a Data Mover system. If the primary
component of a system goes down, a High Availability configuration ensures that the system continues to
function with a secondary component.
This configuration depends on a monitoring service, which monitors the primary, or master, components
through SSH connections to see if services are running. If any of the main components are down, a failover
sequence begins the process of allowing the slave component to take over for the master component. The
daemon, agent, and Sync service that will be monitored by the monitoring service on the master and slave
components must be run using user dmuser. The monitoring service cannot be used to monitor the
daemon, agent, and Sync service components if a user other than dmuser has been set up to run these
services.

Configuring Automatic Failover


Data Mover provides automatic failover support when multiple Data Mover servers are configured in a dual
environment. Automatic failover configuration must meet the following requirements:
• Two additional monitoring servers are available to monitor the master and slave components. It is highly recommended that you use a Viewpoint managed server for this purpose.
• Each monitoring server must be local to the site and ideally be attached to the same network as the
components being monitored to avoid automatic failovers caused by network partitions.
• The DMFailover package must be installed on all servers, including the master and slave daemon,
monitoring, and agent servers that are part of the cluster.
Teradata Data Mover
Installation, Configuration, and Upgrade Guide for Customers Release 16.10 27
Chapter 2: Configuring the Environment
About Configuring High Availability
If additional monitoring servers are not available, you can enable failover by using the Data Mover
synchronization service. The synchronization service alone does not support automatic failover and requires
manual intervention to enable failover from master to slave components. If configuring the synchronization
service without configuring automatic failover, see Configuring the Synchronization Service Without
Automatic Failover.

Note:
When using the synchronization service with failover, use the public IP address of the system, not the
localhost hostname or the 127.0.0.1 IP address.

The following files are required when using the monitoring service, where nn.nn in the paths refers to the major and minor version numbers of Data Mover.

/etc/opt/teradata/datamover/failover.properties
  Specifies the master and slave components to be monitored.

/opt/teradata/client/nn.nn/datamover/failover/dmcluster
  Script for setting up SSH log on, configuring the servers in master and slave modes, starting and stopping the monitoring service, and checking the status of the master and slave components.

/opt/teradata/client/nn.nn/datamover/failover/DMFailover.jar
  Executable binary file used for automatic failover.

/etc/opt/teradata/datamover/monitor.properties
  Specifies whether Server Management alerts need to be sent if the monitoring service detects a failure. This file is used only on the monitoring server.

The following tasks must be performed to configure automatic failover:


1. Verifying Data Mover Package Installation
2. Setting Up Host Files or DNS
3. Verifying Required Ports Open
4. Defining Unique Data Mover Agent Names
5. Synchronizing the Master and Slave Repositories
6. Configuring Dual Active Java Message Service (JMS) Brokers
7. Configuring the Sync Service
8. Configuring the Cluster and Starting the Monitoring Service
9. Checking the Status of Master and Slave Components
10. Starting the Synchronization Service
11. Verifying Failover Configuration

High Availability Configuration Scenario


To configure high availability successfully, each step must be performed in the order in which it appears in this section. A use-case example demonstrating the configuration process follows each step. The following table defines the terms used during the failover configuration process.


DM1 (Original-master)
  The system primarily used as the master system before failover occurred. Typically, the server where the primary Data Mover daemon runs, along with one Data Mover agent.

DM2 (Designated-slave)
  The slave system assigned to take responsibility as the master system if the original-master stops working. If the DM1 server fails, this is the server that becomes the next master. In the meantime, one Data Mover agent runs here.

DM3 (Additional agent)
  Server where one Data Mover agent runs. Not a slave server.

VP1 (Original-master monitor)
  Server where the primary Data Mover monitor runs. This monitor continuously checks that DM1 is running properly and initiates failover to DM2 if necessary.

VP2 (Designated-slave monitor)
  Server where the secondary Data Mover monitor runs. This monitor becomes active only if failover occurs. If failover occurs, this monitor continuously checks that DM2 is running properly and initiates failover to DM1 if necessary.

Verifying Data Mover Package Installation


1. Verify the original-master, designated-slave, and slave-only servers have the following required packages
installed:
DMAgent-XX.XX.XX.XX-1
DMCmdline-XX.XX.XX.XX-1
DMDaemon-XX.XX.XX.XX-1
DMFailover-XX.XX.XX.XX-1
DMSync-XX.XX.XX.XX-1
Servers must be one of the following server types:
• Data Mover Teradata Managed Server
• Consolidated Teradata Managed Server with Data Mover
2. Verify the additional agent server has the DMAgent-XX.XX.XX.XX-1 package installed.
Server must be one of the following server types:
• Data Mover Teradata Managed Server
• Consolidated Teradata Managed Server with Data Mover
• Cloud-based instance of Data Mover (for example, Teradata Managed Cloud or Amazon AWS)
3. Verify the original-master monitor and the designated-slave monitor have the DMFailover-
XX.XX.XX.XX-1 package installed.
Viewpoint Teradata Managed Server is recommended.

Example: Verifying Data Mover Package Installation
1. On DM1 and DM2, verify that the following packages are installed:
• DMAgent-XX.XX.XX.XX-1
• DMCmdline-XX.XX.XX.XX-1
• DMDaemon-XX.XX.XX.XX-1
• DMFailover-XX.XX.XX.XX-1
• DMSync-XX.XX.XX.XX-1
2. On DM3, verify that the DMAgent-XX.XX.XX.XX-1 package is installed.
3. On VP1 and VP2, verify that the DMFailover-XX.XX.XX.XX-1 package is installed.

Setting Up Host Files or DNS


You must create a unique alias for every server with DNS, or in the host files, to ensure that all servers in the
automatic failover cluster can connect to each other. The unique alias allows you to reconfigure automatic
failover without needing a new IP address when the IP address of a server changes.
1. Define all the servers in the cluster by creating a unique alias in the DNS or the host files.
If using the host file method, define the IP addresses in the /etc/hosts file.
2. Add an entry to the /etc/hosts file on the original-master and designated-slave servers to ensure that the local server resolves to a publicly accessible IP address instead of 127.0.0.1.
This is to allow ARCMAIN to run on a remote Data Mover agent and connect its socket connection to
the Data Mover daemon.

Example: Setting Up Host Files or DNS

1. Define an alias for all five servers in the /etc/hosts file on each system (DM1, DM2, DM3, VP1, and
VP2), if not already defined.
For example, the following entries would be added to the host files on DM1:
• ##.##.###.## DM2
• ##.##.###.## DM3
• ##.##.###.## VP1
• ##.##.###.## VP2
2. On DM1 and DM2, add a public IP address to the /etc/hosts file so that the name of the server does
not resolve to the 127.0.0.1 IP address.
An example DM1 entry: ###.##.###.## DM1

Verifying Required Ports Open


Specific ports must be open on each server in the failover cluster. In most cases, if a default port is
unavailable, a different port can be assigned.
1. Use the following chart to verify the required ports are open in the failover cluster:

   Port 22 (SSH): Original-master, Designated-slave, Slave-only, Additional agent, Original-master monitor, Designated-slave monitor
   Port 1025 (CLI and JDBC): Original-master, Designated-slave, Slave-only, Additional agent
   Port 25268 (ARC Access Module): Original-master, Designated-slave, Slave-only, Additional agent
   Port 25168 (ARC server): Original-master, Designated-slave
   Port 61616 (ActiveMQ): Original-master, Designated-slave
   Port 25368 (DM Sync Service): Original-master, Designated-slave
   Port 1080 (RESTful API): Original-master, Designated-slave

Example: Verifying Required Ports are Open


1. Each port only needs to be open for the servers listed:

   Port 22 (SSH): DM1, DM2, DM3, VP1, VP2
   Port 1025 (CLI and JDBC): DM1, DM2, DM3
   Port 25268 (ARC Access Module): DM1, DM2, DM3
   Port 25168 (ARC server): DM1, DM2
   Port 61616 (ActiveMQ): DM1, DM2
   Port 25368 (DM Sync Service): DM1, DM2
   Port 1080 (RESTful API): DM1, DM2

Defining Unique Data Mover Agent Names


The agent.id is set to Agent1 by default. You must enter a unique name for the Data Mover agents when there are multiple agents for a single Data Mover daemon.
1. Enter the unique name for the agent.id property in /etc/opt/teradata/datamover/agent.properties on the original-master, designated-slave, slave-only, and additional-agent servers.

Example: Defining Unique Data Mover Agent Names
1. On DM1, DM2, and DM3, edit the agent.properties file and provide a unique name for agent.id.

Synchronizing the Master and Slave Repositories


Before starting the synchronization service for the first time, synchronize the master and slave repositories. If they are not synchronized, the slave may not function correctly after it is switched to master mode.
When the Sync Service is enabled, make sure port 25368 is open on the Data Mover TMS.
1. Check the current configuration settings on the original-master server:
/opt/teradata/client/nn.nn/datamover/failover/dmcluster status
If automatic failover is not configured, all components show as STOPPED and on localhost.
2. Stop the monitoring service on the master and slave monitoring systems if failover was previously
configured for the systems:
/opt/teradata/client/nn.nn/datamover/failover/dmcluster stopmonitor
Where nn.nn in the path refers to the major and minor version numbers of Data Mover.
3. Check for any jobs that are running:
datamove list_jobs -status_mode r
If any jobs are running, wait for them to complete. You can also stop the running jobs and run cleanup:
datamove stop -job_name [job-name]
datamove cleanup -job_name [job-name]
4. Shut down the synchronization system on the slave system:
/opt/teradata/sync/nn.nn/datamover/dmsync stop
Where nn.nn in the path refers to the major and minor version numbers of Data Mover.
5. Shut down the synchronization system on the master system:
/opt/teradata/sync/nn.nn/datamover/dmsync stop
Where nn.nn in the path refers to the major and minor version numbers of Data Mover.
6. Shut down the daemon if it is running on the slave system:
/etc/init.d/dmdaemon stop
7. Start the Data Mover services on the master system and wait two minutes until the services start:
/etc/init.d/tdactivemq start
/etc/init.d/dmagent start
/etc/init.d/dmdaemon start
8. Backup the master repository:
datamove backup_daemon
A folder with script files is generated in the /var/opt/teradata/datamover/daemon_backup directory. Once Triggers.sql is generated, check the backup_script.output and BackupTriggers.out files for any errors.
To confirm processing is complete, run ls -al repeatedly and check whether any file sizes are still changing.
9. Copy the folder from the master to the slave system, as in the following example:
scp -r /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41 dm-agent8:/var/opt/teradata/datamover/daemon_backup/
10. Shut down the daemon service on the master system:

/etc/init.d/dmdaemon stop
11. Start the Data Mover services on the slave systems and wait two minutes until the services start:
/etc/init.d/tdactivemq start
/etc/init.d/dmagent start
/etc/init.d/dmdaemon start
12. Change the owner of the folders and files that were copied over to dmuser, as in the following example:
chown dmuser /var/opt/teradata/datamover/daemon_backup
chown dmuser /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41
chown dmuser /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41/*
13. Change the permissions of the files and folders that were copied over to 755, as in the following example:
chmod 755 /var/opt/teradata/datamover/daemon_backup
chmod 755 /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41
chmod 755 /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41/*
14. Import the data of the master system repository into the slave system repository by running the
restore_daemon command, as shown in the following example:
datamove restore_daemon -backup_target_dir /var/opt/teradata/datamover/daemon_backup/backup_2016-07-05_13.22.41
To check whether file sizes within this directory are still changing or new files are still being created, run the ls -al command repeatedly. To check for errors, review the temp*.out and restore_script.output files.
15. Check for and remove files that might have been generated previously by the synchronization system
from both the master and all slave systems.
These files are created under the path specified by the sql.log.directory property in the
sync.properties file.
cd /var/opt/teradata/datamover/logs/
rm dmSyncMaster.json
rm slave_*.lastread
rm dmSyncSlave.json
rm slave_sql.lastExecuted
16. Shut down the daemon on the slave system:
/etc/init.d/dmdaemon stop
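The "run ls -al repeatedly" checks in steps 8 and 14 can be automated with a small polling loop; this is a sketch, and the commented-out path is the default backup directory from this guide:

```shell
# Sketch: poll a directory until two consecutive `ls -al` listings are
# identical, meaning no file in it is still growing.
wait_stable() {
  local dir=$1 interval=${2:-5} prev="" cur
  while true; do
    cur=$(ls -al "$dir" 2>/dev/null)
    if [ -n "$prev" ] && [ "$cur" = "$prev" ]; then
      break
    fi
    prev=$cur
    sleep "$interval"
  done
  echo "$dir is stable"
}

# wait_stable /var/opt/teradata/datamover/daemon_backup
```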

Example: Synchronizing the Master and Slave Repositories


1. View current configuration and active components on DM1 or DM2:
/opt/teradata/client/nn.nn/datamover/failover/dmcluster status
Where nn.nn in the path refers to the major and minor version numbers of Data Mover.
Example output when environment has not been configured previously:
----------------------------------------------------------------------
| LOCAL CLUSTER |
-------------------------------------------------------------------------
| COMPONENT | HOST NAME | STATUS |
-------------------------------------------------------------------------
| DM Daemon | localhost | STOPPED |
| ActiveMQ | localhost | STOPPED |
| DM Monitoring Service | localhost | STOPPED |
| DM Sync Service | localhost | STOPPED |
| DM Agent | localhost | STOPPED |
|------------------------------------------------------------------------
| REMOTE CLUSTER |
-------------------------------------------------------------------------
| COMPONENT | HOST NAME | STATUS |
-------------------------------------------------------------------------
| DM Daemon | localhost | STOPPED |
| ActiveMQ | localhost | STOPPED |
| DM Monitoring Service | localhost | STOPPED |
| DM Sync Service | localhost | STOPPED |
| DM Agent | localhost | STOPPED |
-------------------------------------------------------------------------
2. Stop Data Mover monitoring services on VP1 and VP2 to avoid an automatic failover during the
configuration process.
3. Verify that no jobs are running on both the master (DM1) and slave (DM2) servers.
4. Stop the Data Mover Sync Service on DM1.
5. Stop the Data Mover Sync Service on DM2.
6. Stop the Data Mover daemon on DM2.
7. Start the Data Mover daemon on DM1.
8. Run the backup_daemon command to create a backup of the master (DM1) repository.

Note:
Running the backup command momentarily shuts down the master (DM1) daemon. It is
recommended that you analyze the backup files after the command completes to ensure no errors
occurred.
9. Stop the Data Mover daemon on the master (DM1) server.
10. Copy the backup directory on the master (DM1) server to the slave (DM2) server.
11. Start the Data Mover daemon on the slave (DM2) server to run the restore command.
12. Change the ownership of the backup files on the slave (DM2) server to dmuser.
13. Grant 755 permissions to the backup files on the slave (DM2) server.
14. Run the restore_daemon command to restore the master backup to the slave (DM2) repository.
It is recommended that you analyze the backup files after the command completes to ensure the restore
worked properly.
15. Delete the working files on the Data Mover Sync Service for the master (DM1) and slave (DM2)
servers.
Deleting these working files ensures that the Sync Service does not contain old changes when it is
restarted later in the failover configuration process.
16. Stop the Data Mover daemon on the slave (DM2) server.

Configuring Dual Active Java Message Service (JMS) Brokers


Dual active Java Message Service (JMS) brokers ensure that there is no loss of service if a primary JMS
broker is down. The files needed on both the local and remote sites to configure dual active JMS brokers are
as follows:
• daemon.properties
• agent.properties
• commandline.properties
1. Log on to the local daemon host and run the following command as root:
./dmcluster configactivemq -e true -s slaveDaemonHost -p 61616
2. Log on to the remote daemon host and run the following command as root:
./dmcluster configactivemq -e true -s masterDaemonHost -p 61616
Where:
Option  Description
-e      Enable the network-of-brokers configuration
-s      Remote host name where the other ActiveMQ instance is running
-p      Port used to connect to the remote ActiveMQ
3. Inspect TDActiveMQ logs to ensure no errors/warnings are present:
/var/opt/teradata/tdactivemq/logs/activemq.log
4. Enter cluster.enabled=true in daemon.properties, agent.properties, and
commandline.properties on the local and remote sites to enable connection to a secondary JMS
broker.
5. Edit the broker.url property in daemon.properties and commandline.properties on the
local and remote sites to specify a secondary JMS host.
For example, broker.url=primaryJmshost, secondaryJmsHost
On the local site, the primaryJmshost will be the master daemon host and the secondaryJmsHost will be
the slave daemon host. On the remote site, the primaryJmshost will be the slave daemon host and the
secondaryJmsHost will be the master daemon host.
6. [Optional] Run the following command on the local and remote daemon hosts to restore the ActiveMQ
to the standard, non-cluster configuration:
./dmcluster configactivemq -e false
7. [Optional] Inspect TDActiveMQ logs to ensure that TDActiveMQ is restarted in a standard (non-
cluster) configuration with no errors.
8. Edit the broker.url property in agent.properties on the local and remote sites to specify a
secondary JMS host.
For example, broker.url=primaryJmshost, secondaryJmsHost
Make sure that the order is the same for all agents, regardless of whether they are considered master or
slave. For example, all agents will have broker.url=primaryJmshost, secondaryJmsHost.
The primaryJmshost and secondaryJmsHost do not change based on site location. If the order is not the
same, Data Mover job functionality might be impacted.
9. Select Enable in portlets and enter the broker URL/port value for the secondary JMS broker in the Data
Mover Setup portlet to enable clustering when connecting with the daemon.
For example, dm-agent:61616, where dm-agent is the broker URL and 61616 is the port number.
The clustering field values must be set individually for each daemon.
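After steps 4 through 9, the property files on each server can be sanity-checked with a helper like the one below. This is a sketch only; it assumes the default /etc/opt/teradata/datamover directory and exact key=value lines as shown in this guide:

```shell
# Sketch: confirm cluster.enabled and broker.url are present in all three
# Data Mover property files in a given directory.
check_cluster_props() {
  local dir=${1:-/etc/opt/teradata/datamover} f
  for f in daemon.properties agent.properties commandline.properties; do
    if ! grep -q '^cluster.enabled=true' "$dir/$f"; then
      echo "$f: clustering not enabled"; return 1
    fi
    if ! grep -q '^broker.url=' "$dir/$f"; then
      echo "$f: broker.url not set"; return 1
    fi
  done
  echo "cluster properties look consistent"
}
```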

Example: Configuring the Dual Active Java Message Service (JMS) Brokers
1. Run ./dmcluster configactivemq -e true -s DM2 -p 61616 on the original-master (DM1).
2. Run ./dmcluster configactivemq -e true -s DM1 -p 61616 on the designated-slave (DM2).
3. Check TD ActiveMQ logs on DM1 and DM2 to verify the network was enabled:
Network connection between vm://TeradataActiveMQ#8 and tcp://DM2/
XX.XX.XXX.XX:61616@49353 (TeradataActiveMQ) has been established.
4. Set cluster.enabled=true in daemon.properties, agent.properties,
commandline.properties on DM1 and DM2.
5. Set cluster.enabled=true in agent.properties and commandline.properties on DM3.
6. Set broker.url=DM1, DM2 in daemon.properties and commandline.properties on DM1.
7. Set broker.url=DM2, DM1 in daemon.properties and commandline.properties on DM2.
8. Set broker.url=DM1, DM2 in commandline.properties on DM3.
9. Set broker.url=DM1, DM2 in agent.properties on DM1, DM2, and DM3.
10. [Optional] If you are using the Data Mover portlet, you can configure Dual Active Java Message Service
(JMS) Brokers through Viewpoint.
a) Log in to Viewpoint on VP1.
b) Open the Data Mover Setup portlet.
c) Locate and select the DM1 daemon from the list of daemons.
d) Verify the daemon IP/DNS points to DM1.
e) Select Enabled in portlets to enable clustering.
f) Set the broker URL/port to DM2:61616.
g) Click Apply to finish.

Configuring the Sync Service


1. Edit the sync.properties file on the original-master:

Property Value
master.port 25368
master.host Designated-slave

• Use the master.port default port, 25368, unless it is unavailable.


• The master.host setting is only used when Sync Service is running in slave mode. Set as the
designated-slave here in case failover occurs.
2. Edit the sync.properties file on the designated-slave and slave-only servers:

Property Value
master.port Use same value as the original-master.
master.host Specify as the original-master.

The sync.isMaster property does not need to be modified. The failover process is responsible for
starting the synchronization service as master or slave when a failover occurs.
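The two edits above can be applied with a small idempotent helper; this is a sketch, with the default sync.properties path from this guide and the example values shown in the tables:

```shell
# Sketch: set (or append) a key=value pair in a properties file.
set_sync_prop() {
  local file=$1 key=$2 value=$3
  if grep -q "^$key=" "$file"; then
    sed -i "s|^$key=.*|$key=$value|" "$file"
  else
    echo "$key=$value" >> "$file"
  fi
}

# On the original-master:
# set_sync_prop /etc/opt/teradata/datamover/sync.properties master.port 25368
# set_sync_prop /etc/opt/teradata/datamover/sync.properties master.host Designated-slave
```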

Example: Configuring the Sync Service


1. Enter the following changes to the sync.properties file on DM1:

Property Value
master.port 25368
master.host DM2

Use the master.port default port, 25368, unless it is unavailable.


2. Enter the following changes to the sync.properties file on DM2:

Property Value
master.port 25368
master.host DM1

Configuring the Cluster and Starting the Monitoring Service


The monitoring service uses SSH connections to the servers it monitors. This command typically takes a few
minutes to complete and does the following:
• Sets up the SSH logons for the monitoring services so that they can log on without a password to the
servers where the monitored components are installed.
• Stops all Data Mover services except ActiveMQ on the remote site.
• Configures and starts the local daemon, agents, and local sync service in master mode.
• Starts the remote sync service in slave mode.
• Starts the monitoring service on local.monitor.host to monitor the local Data Mover components.
1. Log on to the master daemon server and edit the /etc/opt/teradata/datamover/failover.properties file for your system.
For details, see Failover.properties File.
2. Run the following command as root:
./dmcluster config

Note:
In a default installation, the master repository host is the same as the master daemon host and the
slave repository host is the same as the slave daemon host.

Failover.properties File
The Data Mover failover.properties file contains the properties that control the failover process. When setting up failover, edit these properties according to the system you are configuring.
Property Name Description
local.daemon.host Host where the local (master) daemon runs.
remote.daemon.host Host where the remote (slave) daemon runs.
local.monitor.host Host where the monitoring service that monitors the local (master) services
runs.
remote.monitor.host Host where the monitoring service that monitors the remote (slave) services
runs.
local.repository.host Host where the repository used by the local daemon is installed.

This should be the same host the synchronization service is installed on. If
the repository is installed on the same server as the daemon, this value is the
same as local.daemon.host.
remote.repository.host Host where the repository used by the remote daemon is installed.
This should be the same host the synchronization service is installed on. If
the repository is installed on the same server as the daemon, this value is the
same as remote.daemon.host.
local.agents.host Host where the agents used by the local (master) daemon are installed. If
more than one agent is used, specify a comma-separated list of agents; the
order of the list does not matter.
remote.agents.host The hosts where the agents used by the remote (slave) daemon are installed.
If more than one agent is used, specify a comma-separated list of agents; the
order of the list does not matter.

If external agents are shared between the master and slave, the shared agent names must be specified for
both local.agents.host and remote.agents.host.

Example: Configuring the Cluster and Starting the Monitoring Service


The dmcluster config command sets up SSH connections between the DM1, DM2, DM3, VP1, and VP2
servers. These connections are used by the Data Mover monitoring service to monitor the master Data
Mover components and initiate a failover if any of the master components stop working.
1. Enter the following changes to the failover.properties file on the master (DM1) server:
• local.daemon.host=DM1
• remote.daemon.host=DM2
• local.monitor.host=VP1
• remote.monitor.host=VP2
• local.repository.host=DM1
• remote.repository.host=DM2
• local.agents.host=DM1, DM2, DM3
• remote.agents.host=DM1, DM2, DM3
2. Run ./dmcluster config as root and respond to the following prompts:
Prompt                                                                     Action
Please enter the Root Password for the server running Master Repository:   Provide the root Linux password for DM1.
Please enter the Root Password for the server running Slave Repository:    Provide the root Linux password for DM2.
Please enter the password for root@DM1 (MASTER DAEMON):                    Provide the root Linux password for DM1.
Please enter the password for root@DM2 (SLAVE DAEMON):                     Provide the root Linux password for DM2.
Please enter the password for root@VP1 (MASTER MONITOR):                   Provide the root Linux password for VP1.
Please enter the password for root@VP2 (SLAVE MONITOR):                    Provide the root Linux password for VP2.
Please enter the password for root@DM3 (Master Agent(s)):                  Provide the root Linux password for DM3.

Checking the Status of Master and Slave Components


1. Verify the status of the cluster:
./dmcluster status

Example: Checking the Status of the Master and Slave Components


1. View the newly configured output:
./dmcluster status
You should see something similar to the following:
------------------------------------------------------------------------
| LOCAL CLUSTER |
------------------------------------------------------------------------
| COMPONENT | HOST NAME | STATUS |
------------------------------------------------------------------------
| DM Daemon | DM1 | RUNNING |
| ActiveMQ | DM1 | RUNNING |
| DM Monitoring Service | VP1 | RUNNING |
| DM Sync Service | DM1 | MASTER |
| DM Agent | DM1 | RUNNING |
| DM Agent | DM2 | RUNNING |
| DM Agent | DM3 | RUNNING |
------------------------------------------------------------------------
| REMOTE CLUSTER |
------------------------------------------------------------------------
| COMPONENT | HOST NAME | STATUS |
------------------------------------------------------------------------
| DM Daemon | DM2 | STOPPED |
| ActiveMQ | DM2 | RUNNING |
| DM Monitoring Service | VP2 | STOPPED |
| DM Sync Service | DM2 | SLAVE |
| DM Agent | DM1 | RUNNING |
| DM Agent | DM2 | RUNNING |
| DM Agent | DM3 | RUNNING |
------------------------------------------------------------------------

Starting the Synchronization Service


1. Run the list_configuration command on the master system to generate a file named
configuration.xml:
datamove list_configuration
2. Edit the configuration.xml file and set the value for the job.useSyncService property to true:
<property>
<key>job.useSyncService</key>
<value>true</value>
<description>
Purpose: To record any changes to the Data Mover repository
tables(inserts/updates/deletes) in an audit log table. The value
must be set to true in order to use the Sync service. Default:false.
</description>
</property>
3. Save the new setting:
datamove save_configuration -f configuration.xml
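The edit in step 2 can be made non-interactively with sed; this is a sketch, assuming the XML layout shown above, while the datamove commands themselves are from this guide:

```shell
# Sketch: flip job.useSyncService from false to true inside the generated
# configuration.xml, limited to that property's <property> block.
enable_sync_service() {
  local xml=$1
  sed -i '/<key>job.useSyncService<\/key>/,/<\/property>/ s|<value>false</value>|<value>true</value>|' "$xml"
}

# datamove list_configuration
# enable_sync_service configuration.xml
# datamove save_configuration -f configuration.xml
```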

Example: Starting the Synchronization Service


1. Run the list_configuration command on the master server (DM1).
2. Edit the configuration.xml file and set the job.useSyncService property to TRUE.
3. Run the save_configuration command on the master server (DM1).
All repository changes that occur on the master server (DM1) will now be replicated to the slave server
(DM2).

Verifying Failover Configuration


1. Verify that all Data Mover agents are listed in the output on the original-master:
datamove list_agents
2. Verify that the output is the same as in Step 1 on the designated-slave:
datamove list_agents
3. Verify the daemon is not active on the designated-slave:
/etc/init.d/dmdaemon status
4. Run a Data Mover ARC job.
a) Set max_agents_per_task to the number of Data Mover agents in the cluster.
b) Set data_streams to a value greater than or equal to max_agents_per_task.
c) Check the job status to verify that the job ran successfully and that all Data Mover agents are utilized.
5. Run a Data Mover TPT job.
a) Set max_agents_per_task to the number of Data Mover agents in the cluster.
b) Set data_streams to a value greater than or equal to max_agents_per_task.
c) Check the job status to verify that the job ran successfully and that all Data Mover agents are in use.
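The comparison in steps 1 and 2 can be scripted. The datamove list_agents command is from this guide; the comparison helper and the use of ssh are illustrative assumptions:

```shell
# Sketch: compare agent lists captured from the master and the slave,
# ignoring line order.
same_agents() {
  [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$2" | sort)" ]
}

# master_out=$(ssh DM1 datamove list_agents)
# slave_out=$(ssh DM2 datamove list_agents)
# same_agents "$master_out" "$slave_out" && echo "agent lists match"
```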

Example: Verifying the Failover Configuration


1. On DM1, run datamove list_agents and verify that all three DM agents (DM1, DM2, and DM3)
are active.
2. On DM2, run datamove list_agents and verify the same output as Step 1.
Both DM1 and DM2 should be listed in the ActiveMQ connection log:
About to connect to ActiveMQ at DM1, DM2:61616
3. Run /etc/init.d/dmdaemon status on DM2 and verify that the output indicates NOT RUNNING.
4. Run a few jobs to verify that the cluster is working properly.

Completing the Automatic Failover Setup


1. Run ./dmcluster status to check the configuration status.

Configuring the Synchronization Service Without Automatic Failover
You can configure the synchronization service to enable failover when automatic failover cannot be used.
The synchronization service uses the following files:
File                                                   Description
/etc/opt/teradata/datamover/sync.properties            Settings that the Data Mover Replication Service uses for synchronizing the master and slave repositories
/opt/teradata/datamover/sync/nn.nn/DMReplication.jar   Executable binary file used by the synchronization services
/opt/teradata/datamover/sync/nn.nn/dmsync              Script for starting the synchronization service

Where nn.nn in the path refers to the major and minor version numbers of Data Mover.

Configuring the Synchronization Service


1. Edit the sync.properties file on the Master Sync server and set the sync.isMaster property to
true.
2. Edit the sync.properties file on the Slave Sync server and set the sync.isMaster property to
false.
3. In the sync.properties file on both the Master and Slave Sync servers, set the master.port property to the port through which the master and slave synchronization services communicate.
4. On the Master Sync server, start the synchronization service as Master:
/opt/teradata/datamover/sync/nn.nn/dmsync start
Where nn.nn in the path refers to the major and minor version numbers of Data Mover.
5. On the Slave Sync server, start the synchronization service as Slave:
/opt/teradata/datamover/sync/nn.nn/dmsync start
6. Once the Synchronization services start, enable synchronization by setting the job.useSyncService
configuration property to true on the Master Sync server.

Configuring Data Mover to Use Teradata Ecosystem Manager
1. In the /etc/opt/teradata/datamover/apiconfig.xml file, edit the host and port properties
for the location of the Resilient Publisher.
2. Run the list_configuration command to output a configuration file.
3. Set the appropriate values for the configuration settings for your site.
Parameter Description Default
tmsm.frequency.bytes Controls the frequency, in number of bytes/MB/GB, of job 2147483647
progress events sent to Teradata Ecosystem Manager.

Note:
Using a low value can hurt performance. The
recommendation is to use the default value.

tmsm.mode Controls how Data Mover directs Teradata Ecosystem Manager messages.
Valid Values:
• BOTH
• ONLY_REAL_TMSM
• ONLY_INTERNAL_TMSM
• NONE
When set to:
• BOTH, messages are sent to the Teradata Ecosystem
Manager system and written to the table-driven
interface event tables.
• ONLY_INTERNAL_TMSM, Data Mover only writes
messages to the TMSMEVENT table defined by the
table-driven interface.
• ONLY_REAL_TMSM, Data Mover only sends
messages to the Teradata Ecosystem Manager system.
If Data Mover cannot send events to the real Teradata Ecosystem Manager product, those events are stored in the /opt/teradata/client/em/dataStore/store.dat file. If the value for tmsm.mode is BOTH or ONLY_REAL_TMSM, and Data Mover cannot send events to the real Teradata Ecosystem Manager product, the store.dat file can grow very large. To prevent the store.dat file from taking up too much disk space on the Data Mover managed server, change the value for tmsm.mode to ONLY_INTERNAL_TMSM or NONE, or make sure Data Mover can send events to the real Teradata Ecosystem Manager product.

For more information about Teradata Ecosystem Manager, see the Teradata Ecosystem Manager User
Guide.


Configuring Multiple Managed Servers


Having more than one Data Mover managed server in the environment can improve performance when
copying data from one Teradata Database system to another. Each Data Mover managed server can have one
or more Data Mover components running on it.
If the Data Mover agent must run on a system other than the Data Mover daemon, the host name of the server running the Data Mover daemon must resolve to a publicly accessible IP address in the /etc/hosts file.
If only agents are running on the additional Data Mover managed servers, they must be configured to work
with the Data Mover managed server that has the Data Mover daemon running on it.
When using multiple Data Mover agents, each Data Mover agent must have a unique Agent ID.
1. Provide the correct Apache ActiveMQ broker URL and port number values in one of the following ways:
• During installation of the Data Mover agent component on the Data Mover managed server
• After installation, by setting broker.url and broker.port in the agent.properties file to point to the server where ActiveMQ runs
2. Edit the Agent ID property in the agent.properties file.
3. Restart the Data Mover agent service to implement the changes.

Configuring Data Mover to Log to Server Management
The logger.useTviLogger property in the agent.properties and daemon.properties files
configures Teradata Data Mover to log to Server Management. Because the property defaults to True,
logging to Server Management is automatic and enables critical failures to be reported to Teradata
immediately.
1. Log on to the agent and daemon servers and set the logger.useTviLogger property to True in both the daemon.properties file and the agent.properties file.

Enabling Logging Server Management Alerts When a Failover Occurs
1. Log on to the local and remote monitoring servers: local.monitor.host and
remote.monitor.host.
2. In /opt/teradata/client/nn.nn/datamover/failover/monitor.properties, do the following:
• Set the value for monitor.useTviLogger as True.
• Make sure that tvilogger.properties exists and has been configured with the correct Server
Management logging method.


Configuring Data Mover Managed Server to Increase Network Throughput
All network traffic coming into and out of the DM managed server goes through the default Ethernet port
for the server unless it is specifically routed. If the default Ethernet port is used for all network
communication, the other network ports on the DM managed server are wasted. This could cause the
network to slow down when processing Data Mover jobs, which could lead to poor performance when
copying data.
Data Mover jobs execute much faster if multiple Ethernet ports are used when copying data between
Teradata Database systems. The recommended way to increase network throughput on the DM managed
server is to set up specific network routes for all of the COP entries on the source and target Teradata
Database systems in the Data Mover jobs. A COP entry is the IP address of a Teradata Database node. These
specific network routes allow the DM Agent to connect TCP sessions to the source and target systems using
different Ethernet ports on the DM managed server. This improves performance by distributing data across
all available network ports.
The topics in this section describe how to set up the routes using a 2-node Teradata Database system called
dmdev as a source and a 2-node Teradata Database system called dmtest as target. The examples in this
section assume the network ports eth4 and eth5 are connected and available for use on the DM managed
server.

Note:
More than two ports on the DM managed server could be available in a customer environment. The
examples in this section use only 2-node source and target systems and two available network ports on
the DM managed server.

1. Add the IP addresses for all source and target COP entries in the /etc/hosts file on the DM managed
server.
2. Define the specific routes for the COP entries in the /etc/sysconfig/network/routes file on the
DM managed server.
3. Restart the network on the DM managed server.
4. Verify the route changes are in place on the DM managed server.

About Adding Source and Target COP Entries


The best way to define the IP addresses for the source and target COP entries is to configure them through
DNS. The example below defines the IP addresses for the source and target COP entries in the /etc/hosts
file instead because it is easier to explain all of the steps this way.
The IP addresses (COP entries) for all nodes on the source and target systems are placed in the /etc/hosts file so the DM Agent can resolve them when executing a job. Assuming the IP addresses of the two nodes on dmdev are 153.64.209.91 and 153.64.209.92, and the IP addresses of the two nodes on dmtest are 153.64.106.78 and 153.64.106.79, add the following entries to the /etc/hosts file on the DM managed server:
# COP entries for dmdev
153.64.209.91 dmdev dmdevcop1
153.64.209.92 dmdev dmdevcop2

# COP entries for dmtest

153.64.106.78 dmtest dmtestcop1
153.64.106.79 dmtest dmtestcop2
The COP entries for the source and target systems are now in the /etc/hosts file.
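Appending these entries can be made idempotent so repeated runs do not duplicate lines; a sketch, using the example dmdev entries from above:

```shell
# Sketch: append a COP entry to a hosts file only if the exact line is absent.
add_cop_entry() {
  local hosts=$1 line=$2
  grep -qxF "$line" "$hosts" || echo "$line" >> "$hosts"
}

# add_cop_entry /etc/hosts "153.64.209.91 dmdev dmdevcop1"
# add_cop_entry /etc/hosts "153.64.209.92 dmdev dmdevcop2"
```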

About Defining Routes for Source and Target COP Entries


Next, the network routes for the COP entries can be added to the /etc/sysconfig/network/routes
file. Assume the eth2 interface is used for all public network traffic to and from the DM managed server and
is, therefore, the default network interface for the server. Assume the IP address 153.64.107.254 is the
gateway for all traffic coming into and out of the DM managed server. The following is added to the /etc/sysconfig/network/routes file on the DM managed server to add specific routes for the COP entries on dmdev and dmtest:
# default XXX.XXX.XXX.XXX - ethX
default 153.64.107.254 - eth2

# routes to system dmdev


153.64.209.91 153.64.107.254 - eth4
153.64.209.92 153.64.107.254 - eth4

# routes to system dmtest


153.64.106.78 153.64.107.254 - eth5
153.64.106.79 153.64.107.254 - eth5
These entries force all network traffic between the DM managed server and dmdev to use the eth4 interface
and all network traffic between the DM managed server and dmtest to use the eth5 interface.
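As a sketch of what these entries mean, the snippet below parses a copy of the route lines and prints the interface each COP address will use. On a live server, the same routes can also be applied immediately, without a restart, with ip route add (as root), although the routes file remains the persistent record.

```shell
# Illustrative fragment mirroring the route entries added above.
# Columns: destination, gateway, netmask placeholder, interface.
routes='153.64.209.91 153.64.107.254 - eth4
153.64.209.92 153.64.107.254 - eth4
153.64.106.78 153.64.107.254 - eth5
153.64.106.79 153.64.107.254 - eth5'

# Print which interface carries traffic to each COP address
echo "$routes" | awk '{ print $1 " -> " $4 }'
```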

Restarting the Network


The network on the DM managed server must be restarted for the changes in the
/etc/sysconfig/network/routes file to take effect.

Notice:
Before running this command, verify that restarting the network will not negatively affect other users
on the server.

1. Run the rcnetwork restart command to restart the network on the DM managed server.

About Verifying the Route Changes


The newly configured routes can be verified with the ip or netstat commands. The following example
outputs show correctly configured routes:
# ip route list
153.64.209.92 via 153.64.107.254 dev eth4
153.64.106.78 via 153.64.107.254 dev eth5
153.64.106.79 via 153.64.107.254 dev eth5
153.64.209.91 via 153.64.107.254 dev eth4
127.0.0.0/8 dev lo scope link
default via 153.64.107.254 dev eth2

# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
153.64.209.92 153.64.107.254 255.255.255.255 UGH 0 0 0 eth4
153.64.106.78 153.64.107.254 255.255.255.255 UGH 0 0 0 eth5
153.64.106.79 153.64.107.254 255.255.255.255 UGH 0 0 0 eth5
153.64.209.91 153.64.107.254 255.255.255.255 UGH 0 0 0 eth4
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 153.64.107.254 0.0.0.0 UG 0 0 0 eth2

Data Mover Log Files


The following Data Mover log files are located in the /var/opt/teradata/datamover/logs directory:
• dmDaemon.log
• dmAgent.log
• dmSync.log
• dmFailover.log
• upgrade_backup.log
During Data Mover installation and upgrade, the log files are preserved with up to 10 backups. For example,
dmdaemon-postinstall.log backups are preserved as dmdaemon-postinstall.log.1, dmdaemon-
postinstall.log.2, and so on, up to dmdaemon-postinstall.log.10, where the most recent backup is
dmdaemon-postinstall.log.1. The following log files, with date and timestamp details added, are
preserved during installations and upgrades:
• /tmp/dmdaemon-postinstall.log
• /tmp/dmagent-postinstall.log
• /tmp/put-dmschemaupgrade.log
• /tmp/put-dmlistagents.log
• /var/opt/teradata/datamover/logs/upgrade_backup.log
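The rotation order described above can be illustrated with a small sketch; the exact mechanism Data Mover uses is internal, and this only shows the resulting naming sequence (.1 newest, .10 oldest kept).

```shell
# Print the renames implied by one rotation pass: .9 becomes .10, .8 becomes .9,
# and so on, making room for a fresh .1.
rotate_plan() {
  i=10
  while [ "$i" -gt 1 ]; do
    prev=$((i - 1))
    echo "dmdaemon-postinstall.log.$prev becomes dmdaemon-postinstall.log.$i"
    i=$prev
  done
}

rotate_plan | head -n 2
```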
The Data Mover RESTful API log file, tdrestd.log, is located in the /var/opt/teradata/rest/daemon/logs
directory.
The log file /tmp/dmdaemon-preupgradecheck.log captures the output of the Data Mover pre-upgrade check.

Data Mover Properties Files Preserved During Upgrades
For Data Mover 16.00.00.00 and later, settings from the following properties files are preserved during
upgrades:
• /etc/opt/teradata/datamover/agent.properties
• /etc/opt/teradata/datamover/commandline.properties
• /etc/opt/teradata/datamover/daemon.properties
• /etc/opt/teradata/datamover/failover.properties
• /etc/opt/teradata/datamover/monitor.properties
• /etc/opt/teradata/datamover/sync.properties
• /etc/opt/teradata/datamover/tdmrest.properties
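If you want an extra safeguard beyond the automatic preservation, the sketch below prints the full paths of these files so you can copy them somewhere safe before an upgrade; the echo stands in for an actual cp.

```shell
# Enumerate the properties files preserved across upgrades
props_dir=/etc/opt/teradata/datamover
for f in agent commandline daemon failover monitor sync tdmrest; do
  echo "$props_dir/$f.properties"   # e.g. cp "$props_dir/$f.properties" /your/backup/dir/
done
```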

CHAPTER 3
Administrative Tasks

Data Mover Components Script


Data Mover includes a single script that enables you to check the status of, start, or stop each Data Mover
component. The script, dm-control.sh, is installed by the Data Mover daemon package on the Data Mover
managed server in the directory /opt/teradata/datamover/daemon/16.10.
The script includes the following commands:

dm-control.sh status
  Displays status of the Data Mover daemon, Data Mover agent, Data Mover sync, tmsmonitor,
  Teradata Database, and Teradata ActiveMQ.

dm-control.sh start
  Starts all Data Mover components on the local Data Mover managed server. This includes the Data
  Mover daemon, Data Mover agent, Teradata ActiveMQ, and tmsmonitor. This does not include the
  Data Mover sync service or the failover monitoring service.

dm-control.sh stop
  Stops all Data Mover components on the local Data Mover managed server. This includes the Data
  Mover daemon, Data Mover agent, Teradata ActiveMQ, and tmsmonitor. This does not include the
  Data Mover sync service or the failover monitoring service.

dm-control.sh restart
  Stops and starts all Data Mover components on the local Data Mover managed server. This includes
  the Data Mover daemon, Data Mover agent, Teradata ActiveMQ, and tmsmonitor. This does not
  include the Data Mover sync service or the failover monitoring service.
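A hypothetical wrapper, shown below, illustrates how the four subcommands might be validated before dispatching to the script; dm_ctl is not part of Data Mover, and the echo stands in for the real invocation.

```shell
# Path named in the text above
DM_CONTROL=/opt/teradata/datamover/daemon/16.10/dm-control.sh

dm_ctl() {
  case "$1" in
    status|start|stop|restart)
      # Replace the echo with: "$DM_CONTROL" "$1"
      echo "would run: $DM_CONTROL $1" ;;
    *)
      echo "usage: dm_ctl {status|start|stop|restart}" >&2
      return 1 ;;
  esac
}

dm_ctl status
```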

Changing DBC and DATAMOVER Passwords on the Data Mover Server
Data Mover versions 15.11.04 and later include the script changepassword.sh, which allows the root user to
change the DBC and DATAMOVER passwords from the Data Mover server. When executed, the script logs
into the internal TD repository and changes the repository passwords. The script is part of the tdm-linux
bundle, and its location depends on where the bundle was extracted. For example:

/var/opt/teradata/packages/DataMover/16.10.00.00/changepassword.sh

Note:
If you previously used dm.job.production.password in the daemon.properties file to change
the DATAMOVER password, remove it from daemon.properties before running the
changepassword.sh script.

Note:
We recommend that you run the script before installing or upgrading Data Mover. If you run it after
Data Mover has been installed or upgraded, you must restart the daemon to ensure Data Mover runs
properly.

1. Do one of the following:

Script Option    Description

Interactive
  a. Run the script without arguments:
     # changepassword.sh
  b. When prompted, enter the old and new passwords.

Non-Interactive
  a. Run the script with the -o, -p, -m, and -d arguments. For example:
     # changepassword.sh -o <old dbc password> -p <new dbc password> -m <old datamover password> -d <new datamover password>
     where:
     • old dbc password is the existing password for the DBC user
     • new dbc password is the new password for the DBC user
     • old datamover password is the existing password for the DATAMOVER user
     • new datamover password is the new password for the DATAMOVER user

2. If you ran the changepassword.sh script after Data Mover was installed or upgraded, restart the
daemon as follows:
a. # /etc/init.d/dmdaemon stop
b. # /etc/init.d/dmdaemon start
3. Verify the Data Mover components are ready to use by issuing the following commands:
• datamove list_jobs
• datamove list_agents
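The flag-to-value pairing in the non-interactive form can be sketched with getopts; this is not the real changepassword.sh, only an illustration of the argument shape. Quote any password that contains spaces or shell metacharacters.

```shell
# getopts demo of the -o/-p/-m/-d argument shape (illustrative, not the real script)
parse_args() {
  OPTIND=1   # reset so the function can be called more than once
  while getopts "o:p:m:d:" opt; do
    case "$opt" in
      o) old_dbc=$OPTARG ;;
      p) new_dbc=$OPTARG ;;
      m) old_dm=$OPTARG ;;
      d) new_dm=$OPTARG ;;
    esac
  done
  echo "dbc: $old_dbc -> $new_dbc, datamover: $old_dm -> $new_dm"
}

# Illustrative placeholder passwords, single-quoted
parse_args -o 'oldDbcPw' -p 'newDbcPw' -m 'oldDmPw' -d 'newDmPw'
```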

Script Example

location:/var/opt/teradata/packages/DataMover160000 # ./changepassword.sh
Do you want to change the DBC password of the Teradata internal repository?
(yes/no/y/n)?
yes
----------------------------------------------------------------
Change DBC Default Password
----------------------------------------------------------------
Old Password:
Retype Old Password:
New Password:
Retype New Password:
Do you want to change the DATAMOVER password of the Teradata internal
repository?(yes/no/y/n)?
y
----------------------------------------------------------------------
Change DATAMOVER Default Password
----------------------------------------------------------------------
Old Password:
Retype Old Password:
New Password:
Retype New Password:
04/19/16 16:01:15 JDBC driver loaded.

04/19/16 16:01:15 Attempting to connect to Teradata via the JDBC driver...


04/19/16 16:01:16 User dbc is connected.
04/19/16 16:01:16 Connection to Teradata established.

04/19/16 16:01:16 Modify user dbc as password=*********


*******************************************************************************
DBC default Password changed successfully
*******************************************************************************
04/19/16 16:01:17 JDBC driver loaded.

04/19/16 16:01:17 Attempting to connect to Teradata via the JDBC driver...


04/19/16 16:01:18 User dbc is connected.
04/19/16 16:01:18 Connection to Teradata established.

04/19/16 16:01:18 Modify user datamover as password=*********


*******************************************************************************
DATAMOVER default Password changed successfully
*******************************************************************************
*******************************************************************************
DBC and DATAMOVER passwords are encrypted and stored in /etc/opt/teradata/datamover/password.txt file
Granting permission to dmuser to access password.txt

Creating a Diagnostic Bundle for Support


For Data Mover situations such as job failure, job hanging, or other issues that require an incident report,
Teradata includes command-line, interactive scripts for collecting necessary job and system information.
The resulting diagnostic bundle enables Teradata Customer Support to provide optimum analysis and
resolution. Customer support is available around the clock, seven days a week, through the Global Technical
Support Center (GSC). To learn more, go to https://access.teradata.com.
1. Create a support incident including the following settings:

Option Setting
Product Area System Management Utilities
Problem Type Teradata Data Mover
2. Record the incident number and leave it open to attach the diagnostic bundle.
Note:
The interactive script prompts you to enter the incident number and other information related to the
issue.
3. As the root user, locate the scripts at /opt/teradata/datamover/support/ for every Data Mover
server in your environment, and do the following:

Server Type Description


Data Mover Server Run dmsupport.sh to create a diagnostic bundle.
Server Running Only Data Mover Agent Run dmagentsupport.sh to create a diagnostic bundle.

Be sure to include relevant problem descriptions for troubleshooting as prompted.


The dmsupport.sh script collects the following information from the Data Mover log files on the Data
Mover managed server:
• ActiveMQ queue information
• Recent temp and task directories
The script creates three output files:

Output File    Contents

datamover-job-status
  • Data Mover health information
  • Data Mover and TTU packages rpm information
  • List of total and failed Data Mover jobs
  • List of job steps for failed jobs

datamover-properties
  All Data Mover properties files, including the following:
  • List of files from the Data Mover components installation directory
  • ps aux command output

datamover-server-details
  OS, kernel, CPU, memory, and disk space information, including the following:
  • jstack and jmap output for tdactivemq, daemon, and agent processes

The dmagentsupport.sh script collects the following information from a server running only the Data
Mover agent:
• Data Mover log files from the agent server
• Recent temp and task directories
The dmagentsupport.sh script creates a data-mover-agent-support output file, which contains
the following information:
• Data Mover agent.properties files
• List of files from the Data Mover components installation directory
• OS, kernel, CPU, memory, and disk space information
• Data Mover and TTU packages rpm information
After the script collects the data, a bundle named DataMover-$currentdate-$hostname-1.zip is
created in /var/opt/teradata/datamover/support/incidentnumber.

Note:
If the bundle size is larger than 49 MB, additional .zip files are created as follows:
• DataMover-$currentdate-$hostname-2.zip
• DataMover-$currentdate-$hostname-3.zip
4. Update the incident, browse to the resulting .zip files, attach them to the incident, and submit.
5. Contact Teradata Customer Support when the diagnostic bundle is ready for review, and include your
incident number for reference.
6. [Optional] If you do not wish to keep the .zip files, delete them from the directory
/var/opt/teradata/datamover/support/incidentnumber on the Data Mover server.
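The bundle name described above can be sketched as follows; the date format shown is an assumption, as the guide does not specify it.

```shell
# Compose the diagnostic bundle name from the current date and hostname
currentdate=$(date +%Y%m%d)   # assumed date format
host=$(uname -n)              # stands in for the server hostname
bundle="DataMover-${currentdate}-${host}-1.zip"

echo "$bundle"
echo "created under: /var/opt/teradata/datamover/support/<incidentnumber>/$bundle"
```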

SECTION II
Upgrading Software

CHAPTER 4
Upgrading Software

About Upgrading Data Mover Software


To upgrade the Teradata Data Mover DMCmdline software package on non-Teradata servers, do the
following:
1. Create an incident on Teradata at Your Service.
2. Uninstall the DMCmdline package on your operating system.
3. Reinstall the DMCmdline package on your operating system.
4. Contact your Customer Service Representative.

Note:
All Data Mover upgrades except the DMCmdline package are performed by Teradata Customer Support.

Note:
The DMFailover package from the monitoring servers should be the same version as the Failover package
from the Data Mover managed server.

Creating an Incident
You must obtain an incident number from Teradata Support before performing any software upgrades.
1. On your Windows PC, open a web browser, go to Teradata Support at https://access.teradata.com, and
log in.
2. Click Create Incident.
3. Make the following selections as appropriate:

Note:
Click the green arrows to move forward or back in the selection screens.

Option Description
Site The site where the activity will be performed.
Priority The Priority of the issue.
Product Area The area of the system where the issue is located or where work will be performed.
Problem Type A list of Problem Types.
Synopsis A short Synopsis of the activity being performed.
Description A short description of the activity being performed.
4. Click Submit.
5. Click OK to confirm the submission of the incident.
The incident is added to the list of Incidents. The Status shows Pending (a blue circle) until Customer
Services accepts the incident, at which time the Status becomes Active (a green circle).

Upgrading the Data Mover Command-Line Interface on Non-Teradata Servers
On Solaris Sparc, IBM AIX, Linux (non-Teradata servers), Ubuntu, and Windows systems, the Data Mover
Command-Line Interface must be installed using the following procedures. You cannot use PUT to install
the Command-Line Interface on those systems.

Note:
If there is an existing installation on the system, it must be uninstalled before re-installing. You can have
only one version of the Data Mover Command-Line package on a server.
Steps 1 through 4 do not apply to installation on Windows systems.

Note:
As of Data Mover 16.00, only the major and minor versions of the Data Mover daemon and the Data
Mover command line interface or Data Mover portlet must match.

1. Add the following lines of code to the end of the /etc/profile file to update the JAVA_HOME and
PATH environment variables for all users:
export JAVA_HOME={full path of java installation location}
export PATH=$JAVA_HOME/bin:$PATH
2. Run the command:
source /etc/profile
3. Verify that the output shows JRE 1.7:
java -version
4. Open the .profile file of the root user and verify that the values for the JAVA_HOME and PATH
environment variables are the same as those defined in /etc/profile.
If the values are different, the java -version command will not produce the correct output during
install time, and the installation will fail.
5. Copy the properties file to an outside directory if you want to preserve any customization that you made
to the default values:
TDM_install_directory\CommandLine\commandline.properties
6. Uninstall and upgrade the appropriate software for your system as follows:

Operating System    Actions

Linux (for non-Teradata servers)
  a. At the command line, type export DM_INTERACTIVE_INSTALL=1 to set the environment variable
     for interactive install.
  b. At the command line, type the following:
     gunzip DMCmdline__linux_i386.16.10.00.00.tar.gz
     tar xvf DMCmdline__linux_i386.16.10.00.00.tar
     cd DMCmdline.16.10*
     rpm -Uvh DMCmdline__linux_noarch.16.10.00.00-1.rpm
  c. Answer the prompts as needed, and press Enter to accept the defaults where appropriate.
  d. Type rpm -qa |grep DMCmdline to verify installation.

Solaris Sparc
  a. At the command line, type pkgrm DMCmdline to uninstall.
  b. At the command line, type the following to upgrade:
     gunzip tdm-solaris__solaris_sparc.16.10.00.00.tar.gz
     tar xvf tdm-solaris__solaris_sparc.16.10.00.00.tar
     pkgadd -d `pwd` DMCmdline
  c. Answer the prompts as needed and press Enter to accept defaults where appropriate.
  d. Type pkginfo -l DMCmdline to verify installation.

IBM AIX
  a. At the command line, type installp -u DMCmdline to uninstall.
  b. At the command line, type the following to upgrade:
     gunzip tdm-aix__aix_power.16.10.00.00.tar.gz
     tar xvf tdm-aix__aix_power.16.10.00.00.tar
     installp -acF -d ./DMCmdline DMCmdline
  c. Answer the prompts as needed and press Enter to accept defaults where appropriate.
  d. Type lslpp -l "DM*" to verify installation.

Windows
  a. To uninstall the existing DMCmdline software package, go to Start > Control Panel > Add or
     Remove Programs; then, select Teradata Data Mover Command Line Interface and click Remove.
  b. Copy the Data Mover directory on the media to a folder on the hard drive.
  c. Go to DataMover/Windows and unzip tdm-windows__windows_i386.16.10.00.00.zip.
  d. Go to the DISK1 directory and run setup.exe.
  e. Answer the prompts as needed and press Next to accept defaults where appropriate.
  f. Click Install when finished.
  g. Go to Start > Control Panel > Add or Remove Programs to verify installation.

Ubuntu
  a. At the command line, type dpkg -P dmcmdline to uninstall.
     • The commandline.properties file is preserved as commandline.properties.dpkgsave in the
       /opt/teradata/client/16.10/datamover/commandline directory, and you can ignore the
       following warning:
       Warning: while removing dmcmdline, directory /opt/teradata/client/16.10/datamover/commandline is not empty so not removed
     • If you do not want to preserve the properties file, you can remove the
       /opt/teradata/client/16.10/datamover/commandline folder after the uninstall is completed.
  b. At the command line, type the following:
     tar xzvf tdm-dm-ubuntu__ubuntu.16.10.00.00.tar.gz
     cd DMCmdline.16.10.00.00
     dpkg -i DMCmdline__ubuntu_all.16.10.00.00-1.deb

     Note:
     In Ubuntu, -i is used for both install and upgrade.
  c. Type dpkg -l |grep dmcmdline to verify the installation.

7. Restore the values from the properties file you copied to an outside directory if you want to preserve any
customization that you made to the default values and override the values introduced by the patch:
TDM_install_directory\CommandLine\commandline.properties
8. Specify the broker URL and broker port number for communicating with the JMS bus.
The broker URL value is the machine name or IP address of the machine where ActiveMQ runs. The
broker port value should also be the same as the port number that ActiveMQ uses. The defaults are
broker.url=localhost and broker.port=61616.
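For example, if ActiveMQ runs on a host named dm-ms1 (a placeholder name, not from this guide) on the default port, the relevant entries in commandline.properties would look like this:

```properties
# dm-ms1 is a placeholder for your ActiveMQ host; 61616 is the default port
broker.url=dm-ms1
broker.port=61616
```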

Upgrading the Data Mover Agent on a Linux Teradata Server
1. Copy the properties file to an outside directory if you want to preserve any customization that you made
to the default values:
TDM_install_directory\agent\agent.properties
2. Uninstall and upgrade the appropriate software for your system as follows:

Operating System    Actions

Linux (for non-Teradata servers)
  a. At the command line, type the following to upgrade the DMAgent and TTU packages:
     ./dminstallupgradeagent
  b. Answer the prompts as needed, and press Enter to accept the defaults where appropriate.
  c. Type rpm -qa |grep DMAgent to verify the installation.

3. Restore the values from the properties file you copied to an outside directory if you want to preserve any
customization that you made to the default values and override the values introduced by the patch:
TDM_install_directory\Agent\agent.properties
4. Specify the broker URL and broker port number for communicating with the JMS bus.

The broker URL value is the machine name or IP address of the machine where ActiveMQ runs. The
broker port value should also be the same as the port number that ActiveMQ uses. The defaults are
broker.url=localhost and broker.port=61616.
