
PUBLIC

SAP HANA Smart Data Integration and SAP HANA Smart Data Quality 2.0 SP03
Document Version: 1.0 – 2020-02-12

Administration Guide
© 2020 SAP SE or an SAP affiliate company. All rights reserved.



Content

1 Administration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality

2 Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench
2.1 Enable Statistics Collection
2.2 Access the Data Provisioning Monitors
2.3 Common Concepts and Controls
2.4 Monitoring Agents
2.5 Monitoring Remote Sources
2.6 Monitoring Remote Subscriptions
    Remote Subscription Statuses
2.7 Monitoring Design-Time Objects
2.8 Monitoring Tasks
    Change Retention Period for Data Provisioning Task Monitor
2.9 Configure User Settings Profiles
2.10 Create Notifications
2.11 Creating Monitoring Alerts

3 Monitoring Data Provisioning in SAP Web IDE
3.1 Access the Data Provisioning Monitors
3.2 Monitoring Data Provisioning Agents and Agent Groups
    Information Available on the Data Provisioning Agent Monitors
3.3 Monitoring Remote Sources
3.4 Monitoring Remote Subscriptions
3.5 Monitoring Data Provisioning Tasks
    Information Available on the Data Provisioning Task Monitor

4 Administering Data Provisioning
4.1 Managing Agents and Adapters
    Manage Agents from the Data Provisioning Agent Monitor
    Manage Adapters from the Data Provisioning Agent Monitor
    Measuring Agent Latency
    Back Up the Data Provisioning Agent Configuration
    Uninstall the Data Provisioning Agent
4.2 Managing Agent Groups
    Failover Behavior in an Agent Group
    Load Balancing in an Agent Group
    Create or Remove an Agent Group
    Manage Agent Nodes in an Agent Group
    Add Adapters to an Agent Group
    Configure Remote Sources in an Agent Group
4.3 Managing Remote Sources and Subscriptions
    Create a Remote Source
    Suspend and Resume Remote Sources
    Alter Remote Source Parameters
    Manage Remote Subscriptions
    Processing Remote Source or Remote Subscription Exceptions
4.4 Managing Design Time Objects
    Execute Flowgraphs and Replication Tasks
    Schedule Flowgraphs and Replication Tasks
    Stop Non-Realtime Flowgraph Executions
    Start and Stop Data Provisioning Tasks
    Schedule Data Provisioning Tasks
    Executing Partitions
4.5 Managing Enterprise Semantic Services
    Roles for Enterprise Semantic Services
    Enterprise Semantic Services Knowledge Graph and Publication Requests
    Publishing Artifacts
    Monitor the Status of Publication Requests
    Manage Published Artifacts
    Data Profiling
    Setting Configuration Parameters
    Troubleshooting Enterprise Semantic Services

5 Maintaining Connected Systems
5.1 Maintaining the SAP HANA System
    Update the SAP HANA System
    Takeover/Failback with SAP HANA System Replication
    Failover with SAP HANA Scale-Out
5.2 Maintaining Source Databases
    Restart the Source Database
    Change the Source Database User Password
    Cleaning Up LogReader Archives
    Recover from Missing LogReader Archives
    Recover from LogReader Database Upgrades
    Change the Primary Archive Log Path During Replication
    Maintain the Source Database Without Propagating Changes to SAP HANA
    Recover with Microsoft SQL Server Always On Failover
    Recover with SAP HANA System Replication Failover
6 Troubleshooting and Recovery Operations
6.1 Troubleshooting Real-Time Replication Initial Queue Failures
    Resolve User Privilege Errors
    Resolve Remote Source Parameter Errors
    Resolve Improper Source Database Configuration
    Resolve Improper Adapter Configurations on the Agent
    Resolve Uncommitted Source Database Transactions
    Resolve Log Reader Instance Port Conflicts
    Resolve Data Provisioning Server Timeouts
    Load Clustered and Pooled Table Metadata into SAP HANA
6.2 Recovering from Replication Failures
    Check for Log Reader Errors
    Recover from a Source Table DDL Schema Change
    Recover from a Truncated Source Table
    Recover from Source Table and Replication Task Recreation
    Recover from a Source and Target Data Mismatch
    Recover from Data Inconsistencies
    Recover from an Agent Communication Issue
    Resolve Stopped or Delayed Replication on Oracle
    Resolve Locked SAP HANA Source Tables
    Reset the Remote Subscription
    Clear Remote Subscription Exceptions
6.3 Recovering from Crashes and Unplanned System Outages
    Recover from an Index Server Crash
    Recover from a Data Provisioning Server Crash
    Recover from a Data Provisioning Agent JVM Crash
    Recover from an Unplanned Source Database Outage
    Recover from an SAP ASE Adapter Factory Crash
6.4 Troubleshooting Data Provisioning Agent Issues
    Data Provisioning Agent Log Files and Scripts
    Clean an Agent Started by the Root User
    Agent JVM Out of Memory
    Adapter Prefetch Times Out
    Agent Reports Errors When Stopping or Starting
    Uninstalled Agent Reports Alerts or Exceptions
    Create an Agent System Dump
    Resolve Agent Parameters that Exceed JVM Capabilities
6.5 Troubleshooting Other Issues
    Activate Additional Trace Logging for the Data Provisioning Server
    Resolve a Source and Target Data Mismatch
    Configuring the Operation Cache
    Ensure Workload Management and Resource Consumption

7 SQL and System Views Reference for Smart Data Integration and Smart Data Quality
7.1 SQL Statements
    ALTER ADAPTER Statement [Smart Data Integration]
    ALTER AGENT Statement [Smart Data Integration]
    ALTER REMOTE SOURCE Statement [Smart Data Integration]
    ALTER REMOTE SUBSCRIPTION Statement [Smart Data Integration]
    CANCEL TASK Statement [Smart Data Integration]
    CREATE ADAPTER Statement [Smart Data Integration]
    CREATE AGENT Statement [Smart Data Integration]
    CREATE AGENT GROUP Statement [Smart Data Integration]
    CREATE AUDIT POLICY Statement [Smart Data Integration]
    CREATE REMOTE SOURCE Statement [Smart Data Integration]
    CREATE REMOTE SUBSCRIPTION Statement [Smart Data Integration]
    CREATE VIRTUAL PROCEDURE Statement [Smart Data Integration]
    DROP ADAPTER Statement [Smart Data Integration]
    DROP AGENT Statement [Smart Data Integration]
    DROP AGENT GROUP Statement [Smart Data Integration]
    DROP REMOTE SUBSCRIPTION Statement [Smart Data Integration]
    GRANT Statement [Smart Data Integration]
    PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration]
    SESSION_CONTEXT Function [Smart Data Integration]
    START TASK Statement [Smart Data Integration]
7.2 System Views
    ADAPTER_CAPABILITIES System View [Smart Data Integration]
    ADAPTER_LOCATIONS System View [Smart Data Integration]
    ADAPTERS System View [Smart Data Integration]
    AGENT_CONFIGURATION System View [Smart Data Integration]
    AGENT_GROUPS System View [Smart Data Integration]
    AGENTS System View [Smart Data Integration]
    M_AGENTS System View [Smart Data Integration]
    M_REMOTE_SOURCES System View [Smart Data Integration]
    M_REMOTE_SUBSCRIPTION_COMPONENTS System View [Smart Data Integration]
    M_REMOTE_SUBSCRIPTION_STATISTICS System View [Smart Data Integration]
    M_REMOTE_SUBSCRIPTIONS System View [Smart Data Integration]
    M_SESSION_CONTEXT System View [Smart Data Integration]
    REMOTE_SOURCE_OBJECT_COLUMNS System View [Smart Data Integration]
    REMOTE_SOURCE_OBJECT_DESCRIPTIONS System View [Smart Data Integration]
    REMOTE_SOURCE_OBJECTS System View [Smart Data Integration]
    REMOTE_SOURCES System View [Smart Data Integration]
    REMOTE_SUBSCRIPTION_EXCEPTIONS System View [Smart Data Integration]
    REMOTE_SUBSCRIPTIONS System View [Smart Data Integration]
    TASK_CLIENT_MAPPING System View [Smart Data Integration]
    TASK_COLUMN_DEFINITIONS System View [Smart Data Integration]
    TASK_EXECUTIONS System View [Smart Data Integration]
    TASK_LOCALIZATION System View [Smart Data Integration]
    TASK_OPERATIONS System View [Smart Data Integration]
    TASK_OPERATIONS_EXECUTIONS System View [Smart Data Integration]
    TASK_PARAMETERS System View [Smart Data Integration]
    TASK_TABLE_DEFINITIONS System View [Smart Data Integration]
    TASK_TABLE_RELATIONSHIPS System View [Smart Data Integration]
    TASKS System View [Smart Data Integration]
    VIRTUAL_COLUMN_PROPERTIES System View [Smart Data Integration]
    VIRTUAL_TABLE_PROPERTIES System View [Smart Data Integration]
    BEST_RECORD_GROUP_MASTER_STATISTICS System View [Smart Data Quality]
    BEST_RECORD_RESULTS System View [Smart Data Quality]
    BEST_RECORD_STRATEGIES System View [Smart Data Quality]
    CLEANSE_ADDRESS_RECORD_INFO System View [Smart Data Quality]
    CLEANSE_CHANGE_INFO System View [Smart Data Quality]
    CLEANSE_COMPONENT_INFO System View [Smart Data Quality]
    CLEANSE_INFO_CODES System View [Smart Data Quality]
    CLEANSE_STATISTICS System View [Smart Data Quality]
    GEOCODE_INFO_CODES System View [Smart Data Quality]
    GEOCODE_STATISTICS System View [Smart Data Quality]
    MATCH_GROUP_INFO System View [Smart Data Quality]
    MATCH_RECORD_INFO System View [Smart Data Quality]
    MATCH_SOURCE_STATISTICS System View [Smart Data Quality]
    MATCH_STATISTICS System View [Smart Data Quality]
    MATCH_TRACING System View [Smart Data Quality]

1 Administration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality

This guide describes the common tasks and concepts necessary for the ongoing operation, administration,
and monitoring of SAP HANA smart data integration and SAP HANA smart data quality.

The following areas are covered:

● Monitoring
● Administration and maintenance tasks
● Troubleshooting and recovery operations

For information about the initial installation and configuration of SAP HANA smart data integration and SAP
HANA smart data quality, refer to the Installation and Configuration Guide for SAP HANA Smart Data
Integration and SAP HANA Smart Data Quality.

For information about administration of the overall SAP HANA system, refer to the SAP HANA Administration
Guide.

Related Information

Installation and Configuration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality
SAP HANA Administration Guide

2 Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench

The Data Provisioning monitors are browser-based interfaces that let you monitor agents, remote sources and
subscriptions, design-time objects (flowgraphs and replication tasks), and tasks created in an SAP HANA
system.

Related Information

Enable Statistics Collection [page 8]
Access the Data Provisioning Monitors [page 10]
Common Concepts and Controls [page 11]
Monitoring Agents [page 12]
Monitoring Remote Sources [page 14]
Monitoring Remote Subscriptions [page 15]
Monitoring Design-Time Objects [page 17]
Monitoring Tasks [page 18]
Configure User Settings Profiles [page 20]
Create Notifications [page 21]
Creating Monitoring Alerts [page 22]

2.1 Enable Statistics Collection

For some versions of SAP HANA, to enable statistics collection for the monitors you must activate the
dpStatistics job.

Prerequisites

If you are using one of the following SAP HANA versions, enable statistics collection per the following
procedure. For more information, see the SAP HANA smart data integration Product Availability Matrix (PAM).

● SAP HANA 1.0 SPS 12
● SAP HANA 2.0 SPS 03: versions prior to 2.00.037.04
● SAP HANA 2.0 SPS 04: versions prior to 2.00.045

Administration Guide
8 PUBLIC Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench
The user must have the following roles or privileges.

Table 1: Roles and Privileges

● Enable statistics collection: Role sap.hana.xs.admin.roles::JobSchedulerAdministrator, plus execute privilege on the procedure "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::LOG_DP_STATISTICS"
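As a sketch, you can grant the role and the execute privilege in the SQL console. The user name IM_ADMIN is a placeholder; roles created in the XS repository are granted through the standard _SYS_REPO procedure:

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.xs.admin.roles::JobSchedulerAdministrator', 'IM_ADMIN');
GRANT EXECUTE ON "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::LOG_DP_STATISTICS" TO IM_ADMIN;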

Context

Activate the dpStatistics job as follows.

Procedure

1. Launch the XS Job Admin Dashboard: /sap/hana/xs/admin/jobs/.
2. For the sap.hana.im.dp.monitor.jobs::dpStatistics job, on the XS Job Details page (/sap/hana/xs/admin/jobs/#/package/sap.hana.im.dp.monitor.jobs/job/dpStatistics):
   a. On the Configuration tab, enter the User and Password.
   b. Select the Active check box.
   c. Select Save Job.

Related Information

Assign Roles and Privileges
SAP HANA smart data integration and all its patches Product Availability Matrix (PAM) for SAP HANA SDI 2.0

Administration Guide
Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench PUBLIC 9
2.2 Access the Data Provisioning Monitors

In the SAP HANA Web-based Development Workbench Editor, there are several ways to access the Data
Provisioning monitors.

Prerequisites

● The user must have the following roles or privileges to use the data provisioning monitors.

Table 2: Roles and Privileges

● Access the monitors: Role sap.hana.im.dp.monitor.roles::Monitoring
● Perform administration tasks from the monitors: Role sap.hana.im.dp.monitor.roles::Operations

● Your web browser must support the SAPUI5 library sap.m (for example, Microsoft Internet Explorer 9).
For more information about SAPUI5 browser support, see SAP Note 1716423 and the Product
Availability Matrix (PAM) for SAPUI5.

Context

You can view monitors in the following ways:

● For activated design-time objects (flowgraphs or replication tasks), in the SAP HANA Web-based
Development Workbench: Editor, select the object and click the Launch Monitoring Console icon.
● From within a monitor, you can switch to another monitor by selecting it from the Navigate to drop-down
box.
● Enter the URL address of each monitor directly into a web browser.

The following list describes the URL for each monitor.

● Data Provisioning Agent Monitor: <host name>:80<2-digit instance number>/sap/hana/im/dp/monitor/index.html?view=DPAgentMonitor
● Data Provisioning Remote Source Monitor: <host name>:80<2-digit instance number>/sap/hana/im/dp/monitor/index.html?view=DPRemoteSourceMonitor
● Data Provisioning Subscription Monitor: <host name>:80<2-digit instance number>/sap/hana/im/dp/monitor/index.html?view=DPSubscriptionMonitor
● Data Provisioning Task Monitor: <host name>:80<2-digit instance number>/sap/hana/im/dp/monitor/index.html?view=IMTaskMonitor
● Data Provisioning Design Time Object Monitor: <host name>:80<2-digit instance number>/sap/hana/im/dp/monitor/index.html?view=IMDesignTimeObjectMonitor
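For example, on a host named myhost with instance number 00 (hypothetical values), the Data Provisioning Agent Monitor address is myhost:8000/sap/hana/im/dp/monitor/index.html?view=DPAgentMonitor.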

Related Information

Grant Roles to Users

2.3 Common Concepts and Controls

The Data Provisioning monitors use several shared controls and processes.

Common controls for all monitors:

● Unit: Select to change the unit for parameters such as byte size for memory statistics or time frames for duration statistics.
● Refresh: At the top, monitor level, select to refresh the data on all panes of the monitor, or refresh individual panes.
● Auto Refresh: <x> seconds: Select the check box to enable auto refresh. Selecting Auto Refresh at the top, monitor level also selects the Auto Refresh options for all panes in the monitor. You can change how often to refresh the panes. After 15 minutes, this setting automatically clears.
● Clear Filter: You can apply filters on most columns or objects by selecting the column heading or object and entering filter criteria. Clear Filter removes the filters applied to that pane; the Clear Filter button at the top, monitor level clears all filters for all panes in the monitor.
● Navigate to: <monitor>: Switch to another monitor by selecting it from the Navigate to drop-down box.
● Action History: Select to display a list of recent user actions including user name, timestamp, action, and execution status of the action. These actions are stored in the table "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::DP_UI_ACTION_HISTORY". Users with the Operations role can truncate (clear) this table.
● Settings: General settings for the monitor, such as setting a sender email address for notifications, collapsing panes, or managing user settings profiles.

The remote source and remote subscription statistic Collect Time Interval displays on the toolbar. By default, statistics are collected every 5 minutes (300 seconds). You can change this interval to any value larger than 60 seconds (any value less than 60 is treated as 60 seconds) by changing the dpserver.ini collect_interval value, as in the following SQL statement. Refresh your browser after changing the setting to display the new value.

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini', 'SYSTEM')
SET ('remote_source_statistics', 'collect_interval') = '<time in seconds (default is 300)>'
WITH RECONFIGURE;
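For example, to collect statistics every two minutes (120 seconds is a sample value, not a recommendation):

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini', 'SYSTEM')
SET ('remote_source_statistics', 'collect_interval') = '120'
WITH RECONFIGURE;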

Statistics data older than 15 days is automatically deleted from the "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::DP_STATISTICS" table. If you want to change that value, modify the value 15 in the following SQL statement in the "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::LOG_DP_STATISTICS" procedure:

DELETE FROM "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::DP_STATISTICS"
WHERE COLLECT_TIME < ADD_DAYS (CURRENT_TIMESTAMP, -15);

For any given pane, right-click a column heading to display column view controls:

● Sort Ascending: Sorts the table based on the selected column
● Sort Descending: Sorts the table based on the selected column
● Filter: Filters the table based on the selected column with text you enter
● Columns: Select the columns to show or hide in that pane

Related Information

Assign Roles and Privileges
Configure User Settings Profiles [page 20]

2.4 Monitoring Agents

The Data Provisioning Agent Monitor displays system information for all agents such as status, memory
details, adapter-agent mapping, and agent group information.

The Agent Monitor pane displays basic information such as the agent details, status, the last time it connected
to the Data Provisioning server, and memory usage details.

Additional controls for this pane include:

● Create Agent: Creates a new agent. Enter the required parameters: Agent Name, Host name, and Port.
● Alter Agent: Lets you modify agent parameters (for example, Host, Port, Enable SSL, Agent Group) for the selected agent.
● Drop Agent: Removes the selected agent from the SAP HANA system.
● Add Adapters: Adds an adapter to the selected agent instance.
● Update Adapters: Refreshes all adapters registered for the selected agent so that any new capabilities can be used by SAP HANA.

The following statuses are possible:

● CONNECTING
● DISCONNECTED
● CONNECTED
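The Create Agent, Alter Agent, and Drop Agent controls correspond to the CREATE AGENT, ALTER AGENT, and DROP AGENT statements in the SQL reference (chapter 7). A minimal sketch in the SQL console, assuming a hypothetical agent name and host and the default agent TCP port:

CREATE AGENT MY_AGENT PROTOCOL 'TCP' HOST 'agenthost.example.com' PORT 5050;
DROP AGENT MY_AGENT;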

The Adapter Agent Mapping pane identifies the adapters that are associated with each agent instance. When
you first open the Data Provisioning Agent Monitor, it displays information for all agents and adapters.
Selecting an agent displays Adapter-Agent Mapping for that agent.

Additional controls for this pane include:

● Remove Location: Removes an adapter from an agent instance.
● Update: Updates the selected adapter for the agent so that any new capabilities can be used by SAP HANA.

The Agent Group pane lets you view, create, and manage agent groups.

Additional controls for this pane include:

● Create: Creates a new agent group.
● Drop: Removes the selected agent group.
● Add Agents: Adds one or more agents to the selected group.

Related Information

Manage Agents from the Data Provisioning Agent Monitor [page 36]
Manage Adapters from the Data Provisioning Agent Monitor [page 38]
Create or Remove an Agent Group [page 46]
Manage Agent Nodes in an Agent Group [page 47]

2.5 Monitoring Remote Sources

The Data Provisioning Remote Source monitor lets you view each remote source's status and statistics.

The Remote Source Monitor pane displays basic information such as the adapter name, status, and
subscription information. Selecting a remote source displays the statistics for that source in the pane below.

Status can be one of the following values:

● OK
● ERROR
● SUSPENDED

Subscription information includes:

● Total number of subscriptions to this source
● The number of subscriptions In progress: subscriptions whose status is not APPLY_CHANGE_DATA, CREATED, or blank
● The number of subscriptions Replicating changes: subscriptions whose status is APPLY_CHANGE_DATA

Select a remote source and choose Alter Remote Source to suspend or resume the capture or distribution of
the remote source.

The Remote Source Statistics pane displays details such as aggregated counts of records processed by
receivers and appliers. For aggregated values, the collected time interval is 5 minutes; refer to the statistic's
time stamp.

Some statistics have corresponding graph views. Select the statistic and choose Show Graph. For example, say the Count of all, data and meta, records processed by Applier statistic shows a Statistic Value of 83300. The graph shows the count value at each 5-minute interval over a selected time frame, for example 6 hours (Plot values for past: 6 hours). The summation of all these values is the Statistic Value.

You can select a duration (Plot values for past:) from 6 hours to 15 days, or change the Graph Type.

Scan delay and Receiver delay are calculated as follows.

Scan delay:
● Oracle Log Reader (does not apply to trigger-based replication): Remote_Source_Current_Row_Timestamp - Remote_Source_Processed_Row_Timestamp (dpagent, adapter). Both values are in UTC.
● SAP HANA: LATEST_SCANNED_TRANS_TIME_IN_TRIG_QUEUE - LATEST_TRANS_TIME_IN_TRIG_QUEUE

Receiver delay:
● Oracle Log Reader (does not apply to trigger-based replication): Last received timestamp (dpserver, receiver) - Remote_Source_Processed_Row_Timestamp (dpagent, adapter)
● SAP HANA: Last received timestamp (dpserver, receiver) - LATEST_SENT_UTC_TIME_IN_APPLIER. The Last received timestamp value is the local server time converted to UTC.
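The values in this pane are collected into the statistics table described in Common Concepts and Controls. As a sketch, you can inspect the most recent rows directly in the SQL console:

SELECT TOP 10 *
FROM "SAP_HANA_IM_DP"."sap.hana.im.dp.monitor.ds::DP_STATISTICS"
ORDER BY COLLECT_TIME DESC;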

Related Information

Common Concepts and Controls [page 11]

2.6 Monitoring Remote Subscriptions

The Data Provisioning Remote Subscription Monitor provides information about how data is being replicated to
the Data Provisioning Server.

The Remote Subscription Monitor pane displays basic information such as the design-time information, status,
and subscription type. You can select a subscription name to view its description. Selecting a subscription
displays the statistics for that subscription in the pane below.

Additional controls for this pane include:

● Queue: Initiates real-time data processing.
● Distribute: Applies changes.
● Reset: Resets the real-time process to start from the initial load again.
● Drop: Removes the selected remote subscription.
● Notifications: Select to view or add email notifications (for example, in the case of a warning or error).

The Remote Subscription Statistics pane below displays the same information as in the Remote Source
Statistics pane.

Related Information

Remote Subscription Statuses [page 16]
Processing Remote Source or Remote Subscription Exceptions [page 61]
Suspend and Resume Remote Sources [page 57]
Manage Remote Subscriptions [page 59]
Common Concepts and Controls [page 11]
Monitoring Remote Sources [page 14]

Create Notifications [page 21]

2.6.1 Remote Subscription Statuses

You create remote subscriptions when you want to listen for real-time changes to data that you replicated into
the Data Provisioning Server. A remote subscription can have seven different statuses.

The following list describes each status name as shown in the Data Provisioning Remote Subscription Monitor, with the corresponding M_REMOTE_SUBSCRIPTIONS monitoring view value in parentheses, and its description.

You can select a status entry to display the Replication Status Details window for information about queueing
and distribution statistics:

● Queue command: Adapter received begin marker from the Data Provisioning Server
● Queue command: Adapter applied begin marker to source
● Queue command: Adapter read begin marker from source and sent to the Data Provisioning Server
● Distribute command: Adapter received begin marker from the Data Provisioning Server
● Distribute command: Adapter applied begin marker to source
● Distribute command: Adapter read begin marker from source and sent to the Data Provisioning Server

Created (CREATED)
    Remote subscription is created by the replication task or flowgraph.

Subscribed, request to queue (MAT_START_BEG_MARKER)
    The receiver is waiting for the begin marker that indicates the first changed data to queue while the initial load is running.

Request to stop queuing and start distribution (MAT_START_END_MARKER)
    The receiver queues the rows and is waiting for the end marker that indicates the last row of the initial load.

Queuing changes (MAT_COMP_BEG_MARKER)
    The receiver has received the begin marker, change data is being queued, and the initial load can now start.

Queuing closed, starting distribution (MAT_COMP_END_MARKER)
    The receiver queues the changed rows and is waiting for the end marker that indicates the last row of the initial load. The initial load has completed and the end marker is sent to the adapter. If the state doesn't change to AUTO_CORRECT_CHANGE_DATA, the adapter or source system is slow in capturing the changes.

Applying queued changes using auto-correct (AUTO_CORRECT_CHANGE_DATA)
    When the end marker is received, the applier loads the changed data captured (and queued during the initial load) to the target. If many changes occurred after the initial load started, this state might take a long time to change to APPLY_CHANGE_DATA.

Replicating changes (APPLY_CHANGE_DATA)
    All of the changes captured while the initial load was running have completed and are now loaded to the target. The subscription is now applying changes as they happen in the source system.
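As a sketch, you can also read these statuses from the M_REMOTE_SUBSCRIPTIONS monitoring view in the SQL console; the column names used here are assumptions, so verify them against the M_REMOTE_SUBSCRIPTIONS reference in chapter 7:

SELECT SUBSCRIPTION_NAME, STATE
FROM M_REMOTE_SUBSCRIPTIONS;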

2.7 Monitoring Design-Time Objects

The Data Provisioning Design Time Object Monitor provides information about your design-time objects
(flowgraphs and replication tasks). For example, you can see the duration of a task execution for a flowgraph
and how many records have been processed.

The Design Time Objects pane displays basic information such as the target schema name and whether the
object has a table type input or variables. Selecting a design-time object displays task and remote subscription
information for that object in the panes below.

Additional controls for this pane include:

● Execute: Executes the selected object.
● Schedules: Manage scheduling for the selected object.
● Notifications: Select to view or add email notifications (for example, in the case of a warning or error).

The Task Monitor pane displays task duration and number of processed records.

Additional controls for this pane include:

● Stop: Stop execution of the selected task.
● Remote Statements: For a selected task, if the Remote Statements button is enabled, select it to view the SQL remote statement string used for task execution.
● Notifications: Select to view or add email notifications (for example, in the case of a warning or error).

Selecting a task displays the Data Provisioning Task Monitor.

The Remote Subscription Monitor pane displays subscription status and other processing details. Select a subscription name to launch the Data Provisioning Remote Subscription Monitor. Select a status entry to display a Details dialog with more information, including marker information and exceptions.

Related Information

Execute Flowgraphs and Replication Tasks [page 62]
Schedule Flowgraphs and Replication Tasks [page 63]
Stop Non-Realtime Flowgraph Executions [page 64]
Create Notifications [page 21]
Monitoring Tasks [page 18]
Monitoring Remote Subscriptions [page 15]
Managing Design Time Objects [page 61]

2.8 Monitoring Tasks

The Data Provisioning Task Monitor provides you with information about your replication (hdbreptask) and
transformation (hdbflowgraph) tasks. For example, you can see the duration of a task execution and how many
records have been processed.

When you first open the Data Provisioning Task Monitor, it displays information for all tasks.

The Task Overview pane lists all tasks with information such as design-time name (select it to open the Data Provisioning Design Time Object Monitor), create time, and memory size. You can select a task to show information for only that task in the panes below.

Additional controls for this pane include:

● Start: Start execution of the selected task.
● Schedules: Manage scheduling for the selected object.
● Notifications: Select to view or add email notifications (for example, in the case of a warning or error).

The Task Execution Monitor pane lists all of the recent executions of the task(s) and associated data such as
the schema name, start time, duration, status, and number of processed records.

Additional controls for this pane include:

● Stop: Stop execution of the selected task.
● Remote Statements: For a selected task, if the Remote Statements button is enabled, select it to view the SQL remote statement string used for task execution.
● Execute Remaining Partitions: Execute partitions that didn't run or failed after executing a task.

The Task Operation Execution Monitor pane lists all of the recent operations of the task(s) and associated data
including the operation type, start time, duration, status, and number of processed records.
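The Start and Stop controls correspond to the START TASK and CANCEL TASK statements in the SQL reference (chapter 7). A minimal sketch with hypothetical names; CANCEL TASK takes a task execution ID such as the one shown in the Task Execution Monitor:

START TASK "MYSCHEMA"."MY_REPTASK";
CANCEL TASK 123456;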

Related Information

Change Retention Period for Data Provisioning Task Monitor [page 19]
Start and Stop Data Provisioning Tasks [page 65]
Schedule Data Provisioning Tasks [page 66]
Executing Partitions [page 67]
Schedule Flowgraphs and Replication Tasks [page 63]

2.8.1 Change Retention Period for Data Provisioning Task Monitor

Change the retention period for task statistics tables if you want to retain the data in the Task Execution
Monitor and Task Operation Execution Monitor for longer than 90 days.

Context

The following parameters specify how long to keep the statistics data and when to delete them:

● The task_data_retention_period parameter specifies the period of time the data remains in the
statistics tables, in seconds. This period is calculated from the time the task reached the COMPLETED,
FAILED, or CANCELLED status. The default value is 7776000, or 90 days. A value of 0 (zero) or -1 means
never delete the data.
● The task_data_retention_period_check_interval parameter specifies how often the data is
deleted by the garbage collection thread. The default value is 300 seconds (5 minutes).

To change the default values of these parameters, you must add a new section named task_framework to each of the indexserver.ini, scriptserver.ini, and xsengine.ini files.

To change these options in the SQL console:

ALTER SYSTEM ALTER CONFIGURATION ('<server type>.ini', 'SYSTEM')
SET ('task_framework', 'task_data_retention_period_check_interval') = '<in secs>'
WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('<server type>.ini', 'SYSTEM')
SET ('task_framework', 'task_data_retention_period') = '<in secs>'
WITH RECONFIGURE;
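For example, a sketch that keeps task statistics for 30 days (2592000 seconds) in the index server:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('task_framework', 'task_data_retention_period') = '2592000'
WITH RECONFIGURE;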

Procedure

1. Log in to SAP HANA studio as SYSTEM user.

2. In the Systems view, right-click the name of your SAP HANA server and choose Configuration and Monitoring > Open Administration.
3. Click the Configuration tab.
4. Right-click indexserver.ini and choose Add Section.
5. On the Section Name screen of the Add Section Wizard, enter task_framework for the section name, and
click Next.
6. On the Scope Selection screen, select System from the Assign Values to dropdown list, and click Next.
7. On the Key Value Pairs screen, enter task_data_retention_period in the Key field, and in the Value field enter the number of seconds you want the statistics data to be kept in the statistics tables.

 Note

If you set the retention period to 0 (zero) or -1, statistics data is not deleted.

8. Click Finish.
9. Select indexserver.ini, right-click, and choose Add Parameter.
10. On the Add New Parameter screen, enter task_data_retention_period_check_interval in the Key field, and in the Value field enter the time interval for the Task Data Cleanup Process to run.

 Note

If you set the task_data_retention_period_check_interval to less than 60 seconds, the default value (5 minutes) is used.

11. Repeat steps 4 through 10 for scriptserver.ini.
12. If your tasks are called by the XS application, select xsengine.ini and repeat steps 4 through 10.

2.9 Configure User Settings Profiles

By creating user settings profiles, you can quickly switch between different monitor layouts. Settings profiles
contain information about visible columns, column order, column width, column filters, table visibility, and
slider positions.

Context

You can create, modify, or remove settings profiles in each Data Provisioning Monitor by clicking the Settings
button.

Procedure

● To add a new settings profile, click Add.


a. Specify a name for the profile and whether to make it the default profile.
b. Click Add.
A new profile is created using the current layout and column display settings.
● To switch to an existing settings profile, select the profile and click Load.
The current layout and column display settings are updated from the settings saved in the profile.
● To modify an existing settings profile, select the profile and click Update.
a. If you want to make the profile the default profile, select Default.
b. Click Update.
The selected profile is updated with the current layout and column display settings.
● To remove an existing settings profile, select the profile and click Delete.
The selected profile is removed from the Profiles table.

2.10 Create Notifications

Create e-mail notifications for various task, remote subscription, and design-time object statuses.

Prerequisites

The user must have the following roles or privileges to create status notifications:

Table 3: Roles and Privileges

● Enable users to schedule tasks: Role sap.hana.xs.admin.roles::JobSchedulerAdministrator
● Configure SMTP: Role sap.hana.xs.admin.roles::SMTPDestAdministrator
● Create status notifications: Role sap.hana.im.dp.monitor.roles::Operations and application privilege sap.hana.im.dp.monitor::NotificationAdministration

To activate status notifications, the following must occur:

● Scheduling must be enabled in the XS Job Admin Dashboard at http://<host>:<port>/sap/hana/xs/admin/jobs. The user that enables other users to schedule needs the role sap.hana.xs.admin.roles::JobSchedulerAdministrator.
● To create notifications, the job sap.hana.im.dp.monitor.jobs/checkNotifications must be enabled in the XS Job Details page: http://<host>:<port>/sap/hana/xs/admin/jobs/#/package/sap.hana.im.dp.monitor.jobs/job/checkNotifications.
● A sender address must be configured in the monitor. If no sender address is configured, notification emails won't be sent. To configure the sender address, click Settings > Set sender email address in the monitor.
● The SAP HANA SMTP mail client must be correctly configured in the SMTP Configurations page at http://<host>:<port>/sap/hana/xs/admin/index.html#smtp. The user who configures the SMTP client needs the role sap.hana.xs.admin.roles::SMTPDestAdministrator.

Context

You can create notifications for the following statuses:

● Task execution: COMPLETED, FAILED, CANCELLED
● Remote subscription: ERROR, WARNING
● Design time object: COMPLETED, FAILED, CANCELLED, ERROR, WARNING

Procedure

1. In the Task Overview, Remote Subscription Monitor, or Design Time Objects table, select the object for
which you want to create a notification.
2. Click the Notifications button.
The list of notifications for the object is displayed.
3. Click the Add button to create a new notification.
4. Specify a name, status conditions, and recipient e-mail addresses for the notification.
5. If you want to enable the notification immediately, select Is active.
6. Click Create Notification.
The new notification is added to the list of notifications for the object. When the conditions for the
notification are met, users in the recipient list are sent an e-mail containing details about the event that
triggered the notification.

Related Information

Assign Roles and Privileges

2.11 Creating Monitoring Alerts

Create monitoring alerts for various functions.

You can receive alerts for the following functions:

● Agent availability
● Agent memory usage
● Remote subscription exception
● Data Quality reference data
● Long-running tasks

You can configure monitoring alerts from the SAP HANA cockpit. For more information, see the SAP HANA
Administration Guide.

Related Information

Monitoring Alerts (SAP HANA Administration Guide)


SAP HANA Administration Guide: Configuring Alerts

3 Monitoring Data Provisioning in SAP Web IDE

Monitors let you view status and other key performance indicators for objects such as agents, remote sources, and tasks.

In the SAP HANA database explorer, you can view status and other monitoring information for agents, remote
sources and subscriptions, tasks, and design-time objects (flowgraphs and replication tasks) created in an SAP
HANA system.

Access the Data Provisioning Monitors [page 24]
    In SAP Web IDE, you can view agents, remote sources and subscriptions, and tasks created in an SAP HANA system from the SAP HANA cockpit or from within the SAP HANA database explorer.

Monitoring Data Provisioning Agents and Agent Groups [page 25]
    For SAP Web IDE, within the SAP HANA database explorer you can monitor basic system information of an agent such as status, memory, and the time it last connected with the Data Provisioning Server.

Monitoring Remote Sources [page 27]
    Within SAP Web IDE, the SAP HANA database explorer provides information about your remote sources.

Monitoring Remote Subscriptions [page 28]
    Within SAP Web IDE, the SAP HANA database explorer provides information about your remote subscriptions.

Monitoring Data Provisioning Tasks [page 30]
    Within SAP Web IDE, the SAP HANA database explorer provides information about your replication tasks and transformation tasks.

3.1 Access the Data Provisioning Monitors

In SAP Web IDE, you can view agents, remote sources and subscriptions, and tasks created in an SAP HANA
system from the SAP HANA cockpit or from within the SAP HANA database explorer.

Prerequisites

Your web browser must support the SAPUI5 library sap.m (for example, Microsoft Internet Explorer 9). For more information about SAPUI5 browser support, see SAP Note 1716423 and the Product Availability Matrix (PAM) for SAPUI5.

Context

You can view these monitors in the following ways:

● From the SAP HANA cockpit
● From within the SAP HANA database explorer

Procedure

● To view monitors from the SAP HANA cockpit:
  a. Open the SAP HANA cockpit in your browser.
  b. If necessary, enter your database user name and password.
  c. Go to the smart data integration block of links in the System Monitor page.
● To view monitors from within the SAP HANA database explorer:
  a. In SAP Web IDE, choose Tools > Database Explorer.
  b. Choose your system and expand the Catalog folder.
  c. To view a list of objects, select a Catalog item; the list displays in the pane below. Select an object in the pane to see a detail page.
  d. To view monitoring data for a Catalog item, right-click the item and select Show <items>.

Task overview: Monitoring Data Provisioning in SAP Web IDE [page 24]

Related Information

Monitoring Data Provisioning Agents and Agent Groups [page 25]


Monitoring Remote Sources [page 27]
Monitoring Remote Subscriptions [page 28]
Monitoring Data Provisioning Tasks [page 30]

3.2 Monitoring Data Provisioning Agents and Agent Groups

For SAP Web IDE, within the SAP HANA database explorer you can monitor basic system information of an
agent such as status, memory, and the time it last connected with the Data Provisioning Server.

You can sort and hide individual columns by right-clicking a row and selecting your display preferences.

Parent topic: Monitoring Data Provisioning in SAP Web IDE [page 24]

Related Information

Access the Data Provisioning Monitors [page 24]


Monitoring Remote Sources [page 27]
Monitoring Remote Subscriptions [page 28]
Monitoring Data Provisioning Tasks [page 30]

3.2.1 Information Available on the Data Provisioning Agent Monitors

To see an overview of all agents, in the database explorer expand the database's Catalog folder, right-click
Agents, and select Show Agents.

Table 4: Information Available in the Agent Overview Table


Column Description

Agent Name Name of the Data Provisioning Agent.

Agent Host Name of the host on which the agent is running.

Agent Port Port that the agent uses to communicate with the Data Provisioning Server.

Agent Group Group to which the agent has been assigned

Status State of the agent.

The following states are possible:

● CONNECTING
● DISCONNECTED
● CONNECTED

Since Last Connect Elapsed time since the last connection from the Data Provisioning Server to the Data Provi­
sioning Agent.

Last Connect Time The last connect time from the Data Provisioning Server to the Data Provisioning Agent.

Adapters Number of adapters defined for this Data Provisioning Agent.

Protocol Type of network protocol used between the Data Provisioning Agent and the Data Provisioning
Server.

The following protocols are possible:

● TCP
● HTTP

Used Memory Amount of memory currently used by the agent.


Used Swap Space Amount of swap space currently used by the agent.

Free Memory Amount of free memory on the agent host.

Free Swap Space Amount of free swap space on the agent host.

Is SSL Enabled Specifies whether the agent listening on the TCP port uses SSL.

To see an overview of agent groups, in the database explorer expand the database's Catalog folder, right-click
Agent Groups, and select Show Agent Groups.

Table 5: Information Available in the Agent Groups Table


Column Description

Agent Group Name of the Data Provisioning Agent group.

Agent Count Number of agents added to this Agent Group

Connected Agent Count Number of connected agents in this Agent Group
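
The same information is available through SQL. The following is a minimal sketch, assuming you have SELECT
authorization on the SYS schema; on this release, the agent catalog and monitoring views are AGENTS and
M_AGENTS (check the SQL reference for your revision):

 -- List all agents and their group assignments
 SELECT AGENT_NAME, AGENT_GROUP_NAME FROM SYS."AGENTS";
 -- Runtime details such as status, last connect time, and memory usage
 SELECT * FROM SYS."M_AGENTS";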

3.3 Monitoring Remote Sources

Within SAP Web IDE, the SAP HANA database explorer provides information about your remote sources.

Parent topic: Monitoring Data Provisioning in SAP Web IDE [page 24]

Related Information

Access the Data Provisioning Monitors [page 24]


Monitoring Data Provisioning Agents and Agent Groups [page 25]
Monitoring Remote Subscriptions [page 28]
Monitoring Data Provisioning Tasks [page 30]

3.3.1 Information Available on the Data Provisioning Remote Sources Monitor

To see an overview of all remote sources, in the database explorer expand the database's Catalog folder, right-
click Remote Sources, and select Show Remote Sources.

Table 6: Information Available in Remote Sources Overview Table


Column Description

Remote Source Name Name of the remote source.

Adapter Name Name of the adapter.

Location Location of the adapter: the agent or agent group on which the remote source runs.

Agent Name Name of the agent for this remote source.

Status The status of changed-data capture on this remote source.

The following status values are possible:

● OK
● ERROR
● SUSPENDED

Total Subscriptions Number of subscriptions defined for this remote source.

To see details for a remote source, in the database explorer expand the database's Catalog folder and select
Remote Sources. From the resulting list of remote sources in the pane below, select a remote source.

The detail page displays information in four sections: Remote Objects, Dictionary, Statistics, and Exceptions.

You can click Edit to edit the properties of the remote source.

3.4 Monitoring Remote Subscriptions

Within SAP Web IDE, the SAP HANA database explorer provides information about your remote subscriptions.

Parent topic: Monitoring Data Provisioning in SAP Web IDE [page 24]

Related Information

Access the Data Provisioning Monitors [page 24]


Monitoring Data Provisioning Agents and Agent Groups [page 25]
Monitoring Remote Sources [page 27]
Monitoring Data Provisioning Tasks [page 30]

3.4.1 Information Available on the Data Provisioning Remote Subscriptions Monitor

To see an overview of all remote subscriptions, in the database explorer expand the database's Catalog folder,
right-click Remote Subscriptions, and select Show Remote Subscriptions.

Remote Subscription Actions

You can issue an ALTER REMOTE SUBSCRIPTION statement with these commands: Queue, Distribute, and
Reset. You can drop a subscription with the DROP REMOTE SUBSCRIPTION statement. Select one or more
remote subscriptions, and click one of the buttons.

Command Description

Queue Initiate real-time data processing. Typically, the initial load of data is preceded by the
Queue command.

Distribute Applies the changes after the initial load completes.

Reset Restarts the real-time process from the initial load.

Drop Removes the remote subscription.
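
These buttons correspond to SQL statements that you can also run from a SQL console. A minimal sketch,
assuming a hypothetical subscription named "MYSCHEMA"."MySub":

 ALTER REMOTE SUBSCRIPTION "MYSCHEMA"."MySub" QUEUE;
 -- perform the initial load, then apply the queued changes:
 ALTER REMOTE SUBSCRIPTION "MYSCHEMA"."MySub" DISTRIBUTE;
 -- restart the real-time process from the initial load:
 ALTER REMOTE SUBSCRIPTION "MYSCHEMA"."MySub" RESET;
 -- remove the subscription:
 DROP REMOTE SUBSCRIPTION "MYSCHEMA"."MySub";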

Table 7: Information Available in Remote Subscriptions Table


Column Description

Subscription Name Name of the remote subscription.

Schema Name Name of the schema.

Remote Source Name Name of the remote source for which this subscription is defined.

Valid Whether or not the remote subscription is valid.

Subscription State Name of the state of the remote subscription. For more information, see Remote
Subscription Statuses [page 16].

Last Processed Elapsed time since the last changed data was processed.

Last Processed Transaction Time Time the last changed data was processed.

Subscription Type Type of subscription. The following values are possible:

● TABLE
● VIRTUAL TABLE

Target Type Type of target.

The following values are possible:

● TABLE
● VIRTUAL TABLE

To see details for a remote subscription, in the database explorer expand the database's Catalog folder and
select Remote Subscriptions. From the resulting list of subscriptions in the pane below, select a subscription.

Table 8: Information Available in Remote Subscription Statistics Table
Column Description

Schema Name Name of the schema (user name) in the remote source.

Remote Source Name Name of the remote source.

Subscription Name Name of the remote subscription.

Received Count Total number of messages received by the Data Provisioning Server.

Applied Count Total number of messages applied.

Received Size Total size of messages received by the Data Provisioning Server.

Applied Size Total size of messages applied.

Rejected Count Total number of messages rejected.

Since Last Message Received Time elapsed between now and when the last message was received.

Last Message Received Time Time the last message was received.

Since Last Message Applied Time elapsed between now and when the last message was applied.

Last Message Applied Time the last message was applied.
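
You can also read these statistics directly with SQL. A minimal sketch, assuming SELECT authorization on the
SYS monitoring views for this release:

 -- Current state of each remote subscription
 SELECT * FROM SYS."M_REMOTE_SUBSCRIPTIONS";
 -- Received and applied message counts and sizes per subscription
 SELECT * FROM SYS."M_REMOTE_SUBSCRIPTION_STATISTICS";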

3.5 Monitoring Data Provisioning Tasks

Within SAP Web IDE, the SAP HANA database explorer provides information about your replication tasks and
transformation tasks.

You can sort and hide individual columns by right-clicking a row and selecting your display preferences. You can
also start and stop tasks.

Execute Tasks

From the Task Overview or the Task Details page, select a task that you want to run. On the Task Overview
page, you can select more than one task. Click Execute.

Cancel Tasks

From the Task Execution table in the Task Details page, select a running task. You can select multiple tasks from
the Task Execution table. Click Cancel Execution.
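
The Execute and Cancel Execution buttons correspond to the START TASK and CANCEL TASK SQL statements,
which you can also run from a SQL console. A minimal sketch; the schema, task name, and execution ID here
are hypothetical:

 -- Run a task; append ASYNC to run it as a background task
 START TASK "MYSCHEMA"."MyTask";
 -- Cancel a running execution, identified by its task execution ID
 CANCEL TASK 12345;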

Parent topic: Monitoring Data Provisioning in SAP Web IDE [page 24]

Related Information

Access the Data Provisioning Monitors [page 24]


Monitoring Data Provisioning Agents and Agent Groups [page 25]
Monitoring Remote Sources [page 27]
Monitoring Remote Subscriptions [page 28]
Executing Partitions [page 67]

Information Available on the Data Provisioning Task Monitor

To see an overview of all tasks, in the database explorer expand the database's Catalog folder, right-click Tasks,
and select Show Tasks.

Table 9: Information Available in the Task Overview Table


Column Description

Comments Description of the task, from the task plan.

Create Time The time that the task was created.

Has Table Type Input TRUE if the task is modeled with a table type as input. This means data would need to be
passed (pushed) at execution time.

Has SDQ TRUE if the task contains smart data quality (SDQ) functionality.

Has Variables TRUE if the task contains variables.

Input Parameter Count Number of input (tableType) parameters.

Is read-only TRUE if the task is read-only (has only table type outputs), FALSE if it writes to non-table-type
outputs.

Is valid TRUE if the task is in a valid state, FALSE if it has been invalidated by a dependency.

Memory Size Memory size of loaded task.

Plan Version Version of the task plan

Procedure Name If the task was created with a procedure instead of a plan, this attribute contains the name
of the stored procedure.

Procedure Schema If the task was created with a procedure instead of a plan, this attribute contains the
schema name of the stored procedure.

Realtime TRUE if the task is a real-time task, else FALSE.

Realtime Design Time Object TRUE if the task was created from a real-time design time object, such as a replication task or flowgraph.

Owner Name Owner of the task.

Output Parameter Count Number of output (tableType) parameters.

Schema Name Name of the schema in which the task was created.

SQL Security Security model for the task, either DEFINER or INVOKER.

Task ID Unique identifier for a task.


Task Name Name of the data provisioning task.

Task Type Type of task. Derived from task plan.

To see details for a task, in the database explorer expand the database's Catalog folder and select Tasks. From
the resulting list of tasks in the pane below, select a task.

The detail page displays information in three sections: Task Executions, Task Partitions, and Task Operations. If
you select a task in the first table, the other tables show information for only that task.

Use the row-count display to change the number of task executions to display at a time in each table. The
default is 500. After selecting a different task, refresh the tables to see all corresponding information.
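
The monitor reads from catalog and monitoring views that you can also query directly. A minimal sketch,
assuming SELECT authorization on the SYS schema for this release:

 -- Task definitions, including type, plan version, and validity
 SELECT * FROM SYS."TASKS";
 -- Execution history, including status, duration, and processed records
 SELECT * FROM SYS."TASK_EXECUTIONS";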

Table 10: Information Available in Task Executions Table


Column Description

Task Name Name of the Data Provisioning task.

Schema Name Name of the schema in which the task was created.

Host Name of the host on which the task is running.

Port Port number that the task uses to communicate with the Data Provisioning Server.

Task Execution ID Unique identifier for the task execution.

Partition Count Number of logical partitions used for parallel processing in the task.

If no partitions are defined for the task, the partition count is 1.

 Tip
When working in Web-based Development Workbench, you can click the partition count
to display additional information about the partitions used in the task execution.

Start Time Day, date, and time when the task started.

End Time Day, date, and time when the task ended.

Duration Total elapsed time from start to end for COMPLETED or FAILED tasks.

Current elapsed time for RUNNING tasks.

Status Current status of the task.

The following values are possible:

● STARTING
● RUNNING
● COMPLETED
● FAILED
● CANCELLED (or CANCELLING)

 Tip
When working in Web-based Development Workbench, you can click the status to display additional
information about the task execution.


Total Progress Percentage completed.

Processed Records Number of records that the task has processed.

Async Indicates whether or not the task is running as a background task.

Possible values are:

● TRUE
● FALSE

Parameters Parameters passed to the task.

Has Remote Statements Indicates whether the task execution has one or more remote statements.

Connection ID Connection identifier

Transaction ID Transaction identifier used for the task execution

HANA User Name of the user that started the execution of this task.

Application User Application user set in the session context.

Table 11: Information Available in Task Partitions Table


Column Description

Task Name Name of the Data Provisioning task.

Schema Name Name of the schema in which the task was created.

Task Execution ID Unique identifier for the task execution.

Partition Name Name of the partition.

Partition ID Identification number of the logical partition used for parallel processing within the task.

Start Time Day, date, and time when the task started.

End Time Day, date, and time when the task ended.

Duration Total elapsed time from start to end for COMPLETED or FAILED tasks.

Current elapsed time for RUNNING tasks.

Status Current status of the task.

The following values are possible:

● STARTING
● RUNNING
● COMPLETED
● FAILED
● CANCELLED (or CANCELLING)

 Tip
When working in Web-based Development Workbench, you can click the status to display additional
information about the task execution.

Total Progress Percentage completed.


Processed Records Number of records that the task has processed.

Has Remote Statements Indicates whether the task execution has one or more remote statements.

Connection ID Connection identifier

Transaction ID Transaction identifier used for the task execution

Table 12: Information Available in Task Operations Table


Column Description

Operation Name of the operation that the task is currently performing.

For a transformation task, the name of the operation is the name of the node that appears in
the flowgraph.

Schema Name Name of the schema in which the task was created.

Task Name Name of the task.

Task Execution ID Unique identifier for the task execution.

Partition ID Identification number of the logical partition used for parallel processing within the task.

Partition Name Name of the logical partition used for parallel processing within the task.

Operation Type Current type of operation. For example, the operation type can be Table Writer, Adapter,
Projection, and so forth.

Start Time Day, date, and time when the task started.

End Time Day, date, and time when the task ended.

Duration Total elapsed time from start to end.

Status Current status of the task.

The following values are possible:

● STARTING
● RUNNING
● COMPLETED
● FAILED

Progress Percentage completed.

Processed Records Number of records that the task has processed.

Side Effects Indicates whether or not this operation generates side effect statistics.

Possible values are:

● TRUE
● FALSE

4 Administering Data Provisioning

This section describes common tasks related to the ongoing administration of SAP HANA smart data
integration and SAP HANA smart data quality.

Managing Agents and Adapters [page 35]


The Data Provisioning Agent provides secure connectivity between the SAP HANA database and your
on-premise sources. Adapters are registered on the agent and manage the connection to your source.

Managing Agent Groups [page 44]


Agent grouping provides failover and load-balancing capabilities by combining individual Data
Provisioning Agents installed on separate host systems.

Managing Remote Sources and Subscriptions [page 53]


Remote sources establish the connection between a data provisioning adapter and your source
system. Remote subscriptions monitor a remote source for real-time changes to data replicated into
the Data Provisioning Server.

Managing Design Time Objects [page 61]


Design time objects such as flowgraphs and replication tasks manage the replication and
transformation of data in SAP HANA smart data integration and SAP HANA smart data quality.

Managing Enterprise Semantic Services [page 70]


Use the SAP HANA Enterprise Semantic Services Administration browser-based application to
administer and monitor artifacts for semantic services.

4.1 Managing Agents and Adapters


The Data Provisioning Agent provides secure connectivity between the SAP HANA database and your on-
premise sources. Adapters are registered on the agent and manage the connection to your source.

Manage Agents from the Data Provisioning Agent Monitor [page 36]
Use the Data Provisioning Agent Monitor to perform basic administration tasks such as registering,
altering, or dropping Data Provisioning Agents.

Manage Adapters from the Data Provisioning Agent Monitor [page 38]
Use the Data Provisioning Agent Monitor to perform basic administration tasks, such as adding
adapters to or removing adapters from a Data Provisioning Agent instance.

Measuring Agent Latency [page 39]


Use the LATENCY MONITORING remote source capability to monitor latency between the Data
Provisioning Agent and a source database.

Back Up the Data Provisioning Agent Configuration [page 42]
You can back up your Data Provisioning Agent configuration by copying key static configuration files to
a secure location.

Uninstall the Data Provisioning Agent [page 42]


Uninstall the Data Provisioning Agent from a host system using the uninstallation manager.

Parent topic: Administering Data Provisioning [page 35]

Related Information

Managing Agent Groups [page 44]


Managing Remote Sources and Subscriptions [page 53]
Managing Design Time Objects [page 61]
Managing Enterprise Semantic Services [page 70]

4.1.1 Manage Agents from the Data Provisioning Agent Monitor

Use the Data Provisioning Agent Monitor to perform basic administration tasks such as registering, altering, or
dropping Data Provisioning Agents.

Prerequisites

The user must have the following roles or privileges to manage agents:

Table 13: Roles and Privileges

Action Role or Privilege

Add Data Provisioning Agent ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::CreateAgent
● System privilege: AGENT ADMIN

Alter Data Provisioning Agent ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AlterAgent
● System privilege: AGENT ADMIN

Remove Data Provisioning Agent ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::DropAgent
● System privilege: AGENT ADMIN

Context

Use the following controls in the Agent Monitor table to perform an action.

Procedure

● Select Create Agent to register a new agent with the SAP HANA system.
a. Specify the name of the agent and relevant connection information.
b. If the agent uses a secure SSL connection, select Enable SSL.
c. If you want to assign the agent to an existing agent group, select the group under Agent Group.
d. Click Create Agent.

The new agent appears in the Agent Monitor table.


● Select Alter Agent to make connection configuration changes on an agent already registered in the SAP
HANA system.
a. Specify the new connection information for the agent. You can’t change the name or connection
protocol for an existing agent.
b. If the agent uses a secure SSL connection, check Enable SSL.
c. If you want to assign the agent to an existing agent group, select the group under Agent Group.
d. Click Alter Agent.

The updated agent information appears in the Agent Monitor table.


● Select Drop Agent to remove an agent from the SAP HANA system.
a. To drop any dependent objects automatically, such as registered adapters, choose the CASCADE option.
You can’t remove an agent while it has dependent objects such as registered adapters. Remove the
adapters from the agent manually, or check the CASCADE option.
b. Click Drop Agent.

The agent is removed from the Agent Monitor table. If the agent was assigned to an agent group, it’s also
removed from the agent group.

Related Information

ALTER AGENT Statement [Smart Data Integration] [page 158]


CREATE AGENT Statement [Smart Data Integration] [page 168]
CREATE AGENT GROUP Statement [Smart Data Integration] [page 170]
DROP AGENT Statement [Smart Data Integration] [page 182]
DROP AGENT GROUP Statement [Smart Data Integration] [page 183]
Assign Roles and Privileges

4.1.2 Manage Adapters from the Data Provisioning Agent Monitor

Use the Data Provisioning Agent Monitor to perform basic administration tasks, such as adding adapters to or
removing adapters from a Data Provisioning Agent instance.

Prerequisites

The user must have the following roles or privileges to manage adapters:

Table 14: Roles and Privileges

Action Role or Privilege

Add adapter ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AddLocationToAdapter
● System privilege: ADAPTER ADMIN

Remove adapter ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::RemoveLocationFromAdapter
● System privilege: ADAPTER ADMIN

Update adapters ● Role: sap.hana.im.dp.monitor.roles::Operations


● System privilege: ADAPTER ADMIN

Context

Use the buttons in the Agent Monitor and Adapter Agent Mapping tables to perform an action.

Procedure

● To add adapters to an agent instance, select the agent and click Add Adapters in the Agent Monitor table.
a. Select the desired adapters from the list of adapters deployed on the agent instance.
b. Click Add Adapters.

The selected adapters appear in the Adapter Agent Mapping table.


● To remove an adapter from an agent instance, select the adapter and click Remove Location in the Adapter
Agent Mapping table.
a. If the adapter is registered on only one agent instance, you can remove it with the CASCADE option.
b. Click Remove Location.

The adapter is removed from the Adapter Agent Mapping table.


● To update all adapters for an agent, select the agent and click Update Adapters in the Agent Monitor.

All adapters registered for the selected agent are refreshed, and any new capabilities can be used by SAP
HANA.
● To update a single adapter, select the adapter and click Update in the Adapter Agent Mapping table.
The selected adapter is refreshed, and any new capabilities can be used by SAP HANA.

Related Information

CREATE ADAPTER Statement [Smart Data Integration] [page 166]


DROP ADAPTER Statement [Smart Data Integration] [page 181]
Assign Roles and Privileges

4.1.3 Measuring Agent Latency

Use the LATENCY MONITORING remote source capability to monitor latency between the Data Provisioning
Agent and a source database.

 Restriction

Latency monitoring tracks only the general latency between the agent and the source database. There is
currently no capability for tracking specific rows from source to target.

When latency monitoring is enabled, a ticket is written to the source database and then detected and
processed by the adapter. Information about how and when each component processes the ticket is stored in
the M_REMOTE_SOURCE_LATENCY_HISTORY view in the target SAP HANA database.

Each of the following steps is timestamped for tracking:

1. At fixed intervals, SAP HANA sends a request to the adapter to create a latency ticket.
2. Adapter creates and records the ticket in the source database.
3. Adapter reads the ticket and sends it back to the server.
4. Server records that it received the returned ticket and each server subcomponent timestamps the ticket
until it reaches the applier.
The agent latency is the difference between the timestamps for when the ticket is written to the database
and when the ticket is executed at the primary database.

Observe trends in the M_REMOTE_SOURCE_LATENCY_HISTORY and


M_REMOTE_SOURCE_LATENCY_STATUS views rather than looking at momentary snapshots.

For example, consider whether the agent is falling behind or catching up. If it’s catching up and latency is
decreasing, performance may be sufficient and no additional tuning may be needed. If it’s falling behind and
latency is increasing, consider tuning your remote source.

Creating Latency Tickets

You can create latency tickets either as a one-time operation or continuously at a set interval.

● Create a single latency ticket:

ALTER REMOTE SOURCE <remote_source_name> START LATENCY MONITORING '<ticket_name>';

● Create latency tickets automatically at set intervals:

 ALTER REMOTE SOURCE <remote_source_name> START LATENCY MONITORING '<ticket_name>' INTERVAL <interval_in_second>;

● Stop automatic latency ticket creation:

 ALTER REMOTE SOURCE <remote_source_name> STOP LATENCY MONITORING '<ticket_name>';

● Clear the history for a latency ticket:

ALTER REMOTE SOURCE <remote_source_name> CLEAR LATENCY HISTORY '<ticket_name>';

Monitoring Latency Tickets

Monitor agent latency by accessing views in the SAP HANA database.

● Monitor latency tickets:

SELECT * FROM M_REMOTE_SOURCE_LATENCY_HISTORY;

● Check latency status:

SELECT * FROM M_REMOTE_SOURCE_LATENCY_STATUS;

Interpreting the M_REMOTE_SOURCE_LATENCY_HISTORY View

Latency ticket timestamps are recorded following an order of operations that depends on whether the adapter
is a Log Reader adapter or a trigger-based adapter.

Table 15: Operations for Log Reader Adapters


Operation Type Description

1 SDB and/or DDL  Adapter  SDB: Time when the ticket enters the database and is timestamped.
   DDL: Time when the ticket is created in the source.
2 LRI  Adapter  Time when the adapter's inbound picks up the ticket.
3 LRO  Adapter  Time when the adapter's outbound picks up the ticket.
4 SNDR  Adapter  Time when the Data Provisioning Agent sender forwards the ticket to the Data Provisioning Server (receiver).
5 FRAMEWORK_RECEIVED  Server  A single ticket entering the Data Provisioning Server's adapter framework module.
6 FRAMEWORK_SENT  Server  A single ticket exiting the Data Provisioning Server's adapter framework module.
7 RECEIVER_RECEIVED  Server  A single ticket entering the Data Provisioning Server's receiver module.
8 RECEIVER_SENT  Server  A single ticket exiting the Data Provisioning Server's receiver module.
9 DISTRIBUTOR_RECEIVED  Server  Ticket timestamped at the Data Provisioning Server's distributor module.
10 APPLIER_RECEIVED  Server  A single ticket entering the Data Provisioning Server's applier module.
11 APPLIER_APPLIED  Server  A single ticket exiting the Data Provisioning Server's applier module.

Table 16: Operations for Trigger-Based Adapters


Operation Type Description

1 SDB and/or DDL  Adapter  SDB: Time when the ticket enters the database and is timestamped.
   DDL: Time when the ticket is created in the source.
2 SCANNER  Adapter  Time when the adapter's scanner picks up the ticket from the source database.
3 SNDR  Adapter  Time when the Data Provisioning Agent sender forwards the ticket to the Data Provisioning Server (receiver).
4 FRAMEWORK_RECEIVED  Server  A single ticket entering the Data Provisioning Server's adapter framework module.
5 FRAMEWORK_SENT  Server  A single ticket exiting the Data Provisioning Server's adapter framework module.
6 RECEIVER_RECEIVED  Server  A single ticket entering the Data Provisioning Server's receiver module.
7 RECEIVER_SENT  Server  A single ticket exiting the Data Provisioning Server's receiver module.
8 DISTRIBUTOR_RECEIVED  Server  Ticket timestamped at the Data Provisioning Server's distributor module.
9 APPLIER_RECEIVED  Server  A single ticket entering the Data Provisioning Server's applier module.
10 APPLIER_APPLIED  Server  A single ticket exiting the Data Provisioning Server's applier module.

4.1.4 Back Up the Data Provisioning Agent Configuration

You can back up your Data Provisioning Agent configuration by copying key static configuration files to a secure
location. You can use this backup to restore communication between the SAP HANA server and the Data
Provisioning Agent.

 Note

This backup can be restored only to an agent host with the same fully qualified domain name as the original
agent. You can’t use the backup to transport configuration settings between agents with different fully
qualified domain names.

For example, you can’t use a backup from an agent on <host1>.mydomain.com to restore settings to an
agent on <host2>.mydomain.com.

 Restriction

Changed-data capture status information for Log Reader adapters can’t be backed up and restored.

Unless otherwise specified, all files and directories that you need to back up are located under <DPAgent_root>:

● dpagent.ini
● dpagentconfig.ini
● sec
● secure_storage
● ssl/cacerts
● configuration/com.sap.hana.dp.adapterframework
● lib/
● camel/
● LogReader/config
● LogReader/sybfilter/system/<platform>/LogPath.cfg

4.1.5 Uninstall the Data Provisioning Agent

Uninstall the Data Provisioning Agent from a host system using the uninstallation manager.

Context

The uninstallation manager supports graphical and command-line modes on Windows and Linux platforms.

Procedure

● Uninstall the agent from a Windows host in graphical mode.


Call the uninstallation manager from the Control Panel:
Programs and Features > SAP HANA Data Provisioning Agent > Uninstall


● Uninstall the agent from a Windows host in command-line mode.
a. Navigate to the <DPAgent_root>/install directory.

For example, C:\usr\sap\dataprovagent\install.


b. Call the uninstallation manager.
hdbuninst.exe --path "<DPAgent_root>"
● Uninstall the agent from a Linux host in graphical or command-line mode.
a. Navigate to the <DPAgent_root>/install directory.

For example, /usr/sap/dataprovagent/install.


b. Call the uninstallation manager.
○ For graphical mode: ./hdbuninst --main
SDB::Install::App::Gui::Uninstallation::main --path "<DPAgent_root>"
○ For command-line mode: ./hdbuninst --path "<DPAgent_root>"

 Tip

To ensure that all installation entries are removed correctly, use the same user and privileges as the
original installation owner.

For example, if sudo was used during the original installation, log in as the installation owner and
run sudo ./hdbuninst <...>.

Results

The Data Provisioning Agent is uninstalled from the system.

Next Steps

After uninstalling the agent, several files and directories generated by the agent during runtime are left in place.
If you choose, you can safely remove these remaining files and directories manually.

Remove the following files and directories from <DPAgent_root>:

● configTool/
● configuration/
● install/
● log/

● LogReader/
● workspace/

4.2 Managing Agent Groups


Agent grouping provides failover and load-balancing capabilities by combining individual Data Provisioning
Agents installed on separate host systems.

 Restriction

Failover is not supported for initial and batch load requests. Restart the initial load following a failure due to
agent unavailability.

 Restriction

Load balancing is supported only for initial loads. It is not supported for changed-data capture (CDC)
operations.

Planning Considerations

Before configuring agents in a group, review the following considerations and limitations:

● For real-time replication failover, each agent in a group must be installed on a different host system.
● All agents in a group must have identical adapter configurations.
● All agents in a group must use the same communication protocol. You cannot mix on-premise agents
(TCP) and cloud-based agents (HTTP) in a single group.

Failover Behavior in an Agent Group [page 45]


When an agent node in an agent group is inaccessible for longer than the configured heartbeat interval,
the Data Provisioning Server chooses a new active agent within the group. It then resumes replication
for any remote subscriptions active on the original agent.

Load Balancing in an Agent Group [page 46]


With multiple agents in an agent group, you can choose to have the agent for the initial loads selected
randomly, selected from the list of agents in a round-robin fashion, or not load balanced.

Create or Remove an Agent Group [page 46]


You can create an agent group or remove an existing group in the Data Provisioning Agent Monitor.

Manage Agent Nodes in an Agent Group [page 47]


You can manage the agent nodes that belong to an agent group in the Data Provisioning Agent Monitor.

Add Adapters to an Agent Group [page 49]


Before you can create remote sources in an agent group, you must add adapters to the group in the
SAP HANA Web-based Development Workbench.

Configure Remote Sources in an Agent Group [page 50]


To receive the benefits of failover from an agent group, you must configure your remote sources in the
agent group.

Parent topic: Administering Data Provisioning [page 35]

Related Information

Managing Agents and Adapters [page 35]


Managing Remote Sources and Subscriptions [page 53]
Managing Design Time Objects [page 61]
Managing Enterprise Semantic Services [page 70]

4.2.1 Failover Behavior in an Agent Group

When an agent node in an agent group is inaccessible for longer than the configured heartbeat interval, the
Data Provisioning Server chooses a new active agent within the group. It then resumes replication for any
remote subscriptions active on the original agent.

Initial and batch load requests to a remote source configured on the agent group are routed to the first
available agent in the group.

 Restriction

Failover is not supported for initial and batch load requests. Restart the initial load following a failure due to
agent unavailability.

Although no user action is required for automatic failover within an agent group, you may choose to monitor
the current agent node information.

● To query the current master agent node name for a remote source:

SELECT AGENT_NAME FROM "SYS"."M_REMOTE_SOURCES_" WHERE "REMOTE_SOURCE_OID" =
(SELECT REMOTE_SOURCE_OID FROM "SYS"."REMOTE_SOURCES_" WHERE
REMOTE_SOURCE_NAME = '<remote_source_name>');

● To query a list of all agent and agent group names:

SELECT AGENT_NAME,AGENT_GROUP_NAME FROM SYS."AGENTS";

 Caution

If all nodes in an agent group are down, replication cannot continue and must be recovered after one or
more agent nodes are available.

Restarting Agent Nodes in an Agent Group

Restarting nodes in an agent group does not impact active replication tasks.

For the master agent node, stopping or restarting the agent triggers the agent group failover behavior and a
new active master node is selected.

4.2.2 Load Balancing in an Agent Group

With multiple agents in an agent group, you can choose to have the agent for the initial loads selected
randomly, selected from the list of agents in a round-robin fashion, or not load balanced.

 Note

Agent grouping provides load balancing for initial loads only. Load balancing is not supported for changed-
data capture (CDC) operations.

Load balancing is governed by the 'agent_group'.'load_balance_mode' index server parameter and


supports the following modes:

● none: No load balancing is performed.


● random: The agent is chosen randomly.
● round_robin: The chosen agent is the next in the list of available agents after the previously chosen agent.

For example, to select the agent for initial loads randomly:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET
('agent_group', 'load_balance_mode') = 'random' WITH RECONFIGURE;

4.2.3 Create or Remove an Agent Group

You can create an agent group or remove an existing group in the Data Provisioning Agent Monitor.

Prerequisites

The user who creates or removes the agent group must have the following roles or privileges:

Table 17: Roles and Privileges


Action Role or Privilege

Create agent group ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::CreateAgentGroup
● System privilege: AGENT ADMIN

Remove agent group ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::DropAgentGroup
● System privilege: AGENT ADMIN

Context

Use the buttons in the Agent Group table to create or remove an agent group.

Procedure

● Click Create to create an agent group.


Specify the name for the new agent group, and click Create Agent Group.
The new agent group appears in the Agent Group table.
● Select the agent group and click Drop to remove an existing agent group.

 Note

When you remove an agent group, any agent nodes for the group are removed from the group first.
Agents cannot be removed from the group if there are active remote subscriptions.

Any agent nodes are removed from the group, and the group is removed from the Agent Group table.
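
These monitor actions correspond to the statements listed under Related Information. A minimal sketch,
using a hypothetical group name:

 CREATE AGENT GROUP "MyAgentGroup";
 DROP AGENT GROUP "MyAgentGroup";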

Related Information

CREATE AGENT GROUP Statement [Smart Data Integration] [page 170]
DROP AGENT GROUP Statement [Smart Data Integration] [page 183]

4.2.4 Manage Agent Nodes in an Agent Group

You can manage the agent nodes that belong to an agent group in the Data Provisioning Agent Monitor.

Prerequisites

The user must have the following roles or privileges to manage agent nodes:

Table 18: Roles and Privileges
Action Role or Privilege

Create agent ● Role: sap.hana.im.dp.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::CreateAgent
● System privilege: AGENT ADMIN

Add agent to agent group ● Role: sap.hana.im.dp.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AlterAgent
● System privilege: AGENT ADMIN

Remove agent from agent group ● Role: sap.hana.im.dp.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AlterAgent
● System privilege: AGENT ADMIN

Context

Use the buttons in the Agent Monitor and Agent Group tables to perform the action.

 Tip

Select an agent group in the Agent Group table to display its nodes in the Agent Monitor table.

Procedure

● To register a new agent with the SAP HANA system and add it to an existing agent group, click Create
Agent.
When specifying the parameters for the agent, select the agent group from the Agent Group list.

The new agent appears in the Agent Monitor table.


● To modify the group assignment for an existing agent, click Alter Agent.

○ Select the new agent group from the Agent Group list.
If you are assigning the agent to a different group, select the empty entry for Enable SSL to avoid
connection issues when the group is changed.
○ To remove the agent from an agent group, select the empty entry from the Agent Group list.

The group for the agent is displayed in the Agent Monitor table.
● To add multiple existing agents to an agent group, select the group in the Agent Group table and click Add
Agents.
a. Select the agents that you want to add to the group.
b. Click Add Agents.
The selected agents are assigned to the agent group and all associated entries in the Agent Monitor and
Agent Group tables are updated.
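
You can perform the same group assignments in SQL. A minimal sketch, with hypothetical agent, host, and
group names; see the CREATE AGENT and ALTER AGENT statement references below for the full syntax:

 -- Register a new agent directly into a group
 CREATE AGENT "Agent2" PROTOCOL 'TCP' HOST 'host2.mydomain.com' PORT 5050 AGENT GROUP "MyAgentGroup";
 -- Move an existing agent into the group
 ALTER AGENT "Agent1" SET AGENT GROUP "MyAgentGroup";
 -- Remove the agent from the group
 ALTER AGENT "Agent1" UNSET AGENT GROUP "MyAgentGroup";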

Related Information

CREATE AGENT Statement [Smart Data Integration] [page 168]
ALTER AGENT Statement [Smart Data Integration] [page 158]
Manage Agents from the Data Provisioning Agent Monitor

4.2.5 Add Adapters to an Agent Group

Before you can create remote sources in an agent group, you must add adapters to the group in the SAP HANA
Web-based Development Workbench.

Prerequisites

The user who adds an adapter must have the following roles or privileges:

Table 19: Roles and Privileges


Action Role or Privilege

Add adapter to agent group System privilege: ADAPTER ADMIN

Procedure

1. Open the SQL console in the SAP HANA Web-based Development Workbench.
2. If you do not know the agent names, query the system for a list of agents and agent groups.

SELECT AGENT_NAME,AGENT_GROUP_NAME FROM SYS."AGENTS";

3. Create the adapter on the first agent node.

CREATE ADAPTER "<adapter_name>" AT location agent "<agent1_name>";

4. Add the agent to each additional agent node in the agent group.

ALTER ADAPTER "<adapter_name>" ADD location agent "<agent#_name>";

Related Information

CREATE ADAPTER Statement [Smart Data Integration] [page 166]

ALTER ADAPTER Statement [Smart Data Integration] [page 156]

4.2.6 Configure Remote Sources in an Agent Group

To receive the benefits of failover from an agent group, you must configure your remote sources in the agent
group.

Related Information

CREATE REMOTE SOURCE Statement [Smart Data Integration] [page 173]


ALTER REMOTE SOURCE Statement [Smart Data Integration] [page 159]

Configure Remote Sources in the Web-based Development Workbench

Procedure

● To create a new remote source in an agent group:

a. In the Catalog editor, right-click the Provisioning > Remote Sources folder, and choose New
Remote Source.
b. Enter the required configuration information for the remote source, including the adapter name.
c. In the Location dropdown, choose agent group, and select the agent group name.
d. Click Save.
● To add an existing remote source to an agent group:

a. In the Catalog editor, select the remote source in the Provisioning > Remote Sources folder.
b. In the Location dropdown, choose agent group, and select the agent group name.
c. Click Save.

Related Information

Create a Remote Source in the Web-Based Development Workbench [page 54]

Configure Remote Sources in the SQL Console

Procedure

1. Open the SQL console in the SAP HANA studio or Web-based Development Workbench.
2. Execute the CREATE or ALTER REMOTE SOURCE statement in the SQL console.

○ To create a new remote source in the group:

CREATE REMOTE SOURCE <source_name> ADAPTER <adapter_name> AT LOCATION
AGENT GROUP <group_name> <configuration_clause> <credential_clause>

○ To add an existing remote source to the group:

ALTER REMOTE SOURCE <source_name> ADAPTER <adapter_name> AT LOCATION AGENT
GROUP <group_name> <configuration_clause> <credential_clause>

 Note

If you are changing only the location for the remote source, you can omit the ADAPTER and
CONFIGURATION clauses:

ALTER REMOTE SOURCE <source_name> AT LOCATION AGENT GROUP <group_name>
<credential_clause>

Related Information

CREATE REMOTE SOURCE Statement [Smart Data Integration] [page 173]


ALTER REMOTE SOURCE Statement [Smart Data Integration] [page 159]

Alter Remote Source Clauses

When you use ALTER REMOTE SOURCE to modify a remote source, you must specify the configuration and
credential details as XML strings.

Example Credential Clause

WITH CREDENTIAL TYPE 'PASSWORD' USING '<CredentialEntry name="credential">
<user><username></user>
<password><password></password>
</CredentialEntry>'

Example Configuration Clause

CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
<ConnectionProperties name="configurations">
  <PropertyGroup name="generic">
    <PropertyEntry name="map_char_types_to_unicode">false</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="database">
    <PropertyEntry name="cdb_enabled">false</PropertyEntry>
    <PropertyEntry name="pds_use_tnsnames">false</PropertyEntry>
    <PropertyEntry name="pds_host_name"><db_hostname></PropertyEntry>
    <PropertyEntry name="pds_port_number">1521</PropertyEntry>
    <PropertyEntry name="pds_database_name">ORCL</PropertyEntry>
    <PropertyEntry name="cdb_service_name"></PropertyEntry>
    <PropertyEntry name="pds_service_name"></PropertyEntry>
    <PropertyEntry name="pds_tns_filename"></PropertyEntry>
    <PropertyEntry name="pds_tns_connection"></PropertyEntry>
    <PropertyEntry name="cdb_tns_connection"></PropertyEntry>
    <PropertyEntry name="_pds_tns_connection_with_cdb_enabled"></PropertyEntry>
    <PropertyEntry name="pds_byte_order"></PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="schema_alias_replacements">
    <PropertyEntry name="schema_alias"></PropertyEntry>
    <PropertyEntry name="schema_alias_replacement"></PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="security">
    <PropertyEntry name="pds_use_ssl">false</PropertyEntry>
    <PropertyEntry name="pds_ssl_sc_dn"></PropertyEntry>
    <PropertyEntry name="_enable_ssl_client_auth">false</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="jdbc_flags">
    <PropertyEntry name="remarksReporting">false</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="cdc">
    <PropertyGroup name="databaseconf">
      <PropertyEntry name="pdb_archive_path"></PropertyEntry>
      <PropertyEntry name="pdb_supplemental_logging_level">table</PropertyEntry>
    </PropertyGroup>
    <PropertyGroup name="parallelscan">
      <PropertyEntry name="lr_parallel_scan">false</PropertyEntry>
      <PropertyEntry name="lr_parallel_scanner_count"></PropertyEntry>
      <PropertyEntry name="lr_parallel_scan_queue_size"></PropertyEntry>
      <PropertyEntry name="lr_parallel_scan_range"></PropertyEntry>
    </PropertyGroup>
    <PropertyGroup name="logreader">
      <PropertyEntry name="skip_lr_errors">false</PropertyEntry>
      <PropertyEntry name="lr_max_op_queue_size">1000</PropertyEntry>
      <PropertyEntry name="lr_max_scan_queue_size">1000</PropertyEntry>
      <PropertyEntry name="lr_max_session_cache_size">1000</PropertyEntry>
      <PropertyEntry name="scan_fetch_size">10</PropertyEntry>
      <PropertyEntry name="pdb_dflt_column_repl">true</PropertyEntry>
      <PropertyEntry name="pdb_ignore_unsupported_anydata">false</PropertyEntry>
      <PropertyEntry name="pds_sql_connection_pool_size">15</PropertyEntry>
      <PropertyEntry name="pds_retry_count">5</PropertyEntry>
      <PropertyEntry name="pds_retry_timeout">10</PropertyEntry>
    </PropertyGroup>
  </PropertyGroup>
</ConnectionProperties>'

 Note

You cannot change user names while the remote source is suspended.

4.3 Managing Remote Sources and Subscriptions

Remote sources establish the connection between a data provisioning adapter and your source system.
Remote subscriptions monitor a remote source for real-time changes to data replicated into the Data
Provisioning Server.

Remote sources are generally created by an administrator, and can then be used for remote subscriptions in
replication tasks and flowgraphs created by a data provisioning modeler.

Create a Remote Source [page 54]


Using SAP HANA smart data integration, you set up an adapter that can connect to your source
database, then create a remote source to establish the connection.

Suspend and Resume Remote Sources [page 57]


You can suspend and resume capture and distribution for remote sources within the Data Provisioning
Remote Subscription Monitor.

Alter Remote Source Parameters [page 59]


You can modify some remote source parameters while the remote source is suspended.

Manage Remote Subscriptions [page 59]


You can drop, queue, distribute, and reset remote subscriptions within the Data Provisioning Remote
Subscription Monitor.

Processing Remote Source or Remote Subscription Exceptions [page 61]


Evaluate and resolve remote subscription exceptions.

Parent topic: Administering Data Provisioning [page 35]

Related Information

Managing Agents and Adapters [page 35]


Managing Agent Groups [page 44]
Managing Design Time Objects [page 61]
Managing Enterprise Semantic Services [page 70]

4.3.1 Create a Remote Source

Using SAP HANA smart data integration, you set up an adapter that can connect to your source database, then
create a remote source to establish the connection.

Prerequisites

● The user who creates the remote source must have the following roles or privileges:

Table 20: Roles and Privileges


Action Role or Privilege

Create a remote source System privilege: CREATE REMOTE SOURCE

● The Data Provisioning Server must be enabled.


● The Data Provisioning Agent must be installed and configured.
● The adapter must be configured and registered with SAP HANA.

Context

You can create a remote source in more than one way.

Related Information

Create a Remote Source in the Web-Based Development Workbench [page 54]


Create a Remote Source in the SQL Console [page 55]
Create Credentials for a Secondary User [page 57]

4.3.1.1 Create a Remote Source in the Web-Based Development Workbench

In SAP HANA smart data integration, you can create a remote source with the Web-based Development
Workbench user interface.

Prerequisites

The user who creates the remote source must have the following roles or privileges:

Table 21: Roles and Privileges
Action Role or Privilege

Create a remote source ● System privilege: CREATE REMOTE SOURCE

Procedure

1. In the Web-based Development Workbench Catalog editor, expand the Provisioning node.
2. Right-click the Remote Sources folder and choose New Remote Source.
3. Enter the required information including the adapter and Data Provisioning Agent names.
Regarding user credentials, observe the following requirements:
○ A remote source created with a secondary user can be used only for querying virtual tables.
○ If the remote source is used for designing a .hdbreptask or .hdbflowgraph enabled for real time,
use a technical user.
○ If you create a remote subscription using the CREATE REMOTE SUBSCRIPTION SQL statement, use
a technical user.
4. Select Save.

Related Information

Configure Data Provisioning Adapters


CREATE REMOTE SOURCE Statement [Smart Data Integration] (SAP HANA SQL and System Views
Reference) [page 173]

4.3.1.2 Create a Remote Source in the SQL Console

In SAP HANA smart data integration, you can create a remote source using the SQL console.

Prerequisites

The user who creates the remote source must have the following roles or privileges:

Table 22: Roles and Privileges


Action Role or Privilege

Create a remote source ● System privilege: CREATE REMOTE SOURCE

Context

To create a remote source using the SQL console, you must know the connection information for your source.
For an existing remote source, the connection information is in an XML string in the CONFIGURATION
statement.

For your adapter, refer to the remote source configuration topic for that adapter in this guide to see its sample
SQL code. Change the variables to the correct values for your remote source.

The example at the end of this topic illustrates the basic CONFIGURATION connection information XML string
for a Microsoft SQL Server adapter.

After you create the remote source:

● If you’ve recently updated the Data Provisioning Agent, the connection information XML string could also
have been updated for your adapter. Therefore, refresh the adapter to get up-to-date connection
information.
● To view the connection information for an existing remote source, execute SELECT * FROM
"PUBLIC"."REMOTE_SOURCES". In the resulting view, look in the CONNECTION_INFO column.

 Tip

To ensure you can view the entire XML string in the CONNECTION_INFO column, in your SAP HANA
preferences enable the setting Enable zoom of LOB columns.

● To view all of the configuration parameters for a given adapter type, execute SELECT * FROM
"PUBLIC"."ADAPTERS". In the resulting view, look in the CONFIGURATION column. This information can
be useful if you want to, for example, determine the PropertyEntry name for a given parameter in the user
interface, shown as displayName. For example:

<PropertyEntry name="pds_database_name" displayName="Database Name"><database_name></PropertyEntry>
<PropertyEntry name="pdb_dcmode" displayName="Database Data Capture Mode">MSCDC</PropertyEntry>

Example

CREATE REMOTE SOURCE "MySQLServerSource" ADAPTER "MssqlLogReaderAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION
'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyGroup name="data_type_conversion" displayName="Data Type Conversion">
    <PropertyEntry name="map_char_types_to_unicode" displayName="Always Map Character Types to Unicode">false</PropertyEntry>
    <PropertyEntry name="map_time_to_timestamp" displayName="Map SQL Server Data Type Time to Timestamp">true</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="database" displayName="Database">
    <PropertyEntry name="pds_server_name" displayName="Host">myserver.sap.corp</PropertyEntry>
    <PropertyEntry name="pds_port_number" displayName="Port Number">1433</PropertyEntry>
    <PropertyEntry name="pds_database_name" displayName="Database Name">mydb</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="cdc" displayName="CDC Properties">
    <PropertyEntry name="pdb_dcmode" displayName="Database Data Capture Mode">MSCDC</PropertyEntry>
  </PropertyGroup>
  <PropertyGroup name="logreader" displayName="LogReader">
    <PropertyEntry name="skip_lr_errors" displayName="Ignore log record processing errors">false</PropertyEntry>
  </PropertyGroup>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>myuser</user>
  <password>mypassword</password>
</CredentialEntry>';

Related Information

Configure Data Provisioning Adapters


Update the Data Provisioning Agent
CREATE REMOTE SOURCE Statement [Smart Data Integration] [page 173]

4.3.1.3 Create Credentials for a Secondary User

The syntax for creating secondary user credentials for SAP HANA smart data integration adapters is different
from the syntax for SAP HANA system adapters.

For smart data integration adapters, use the following syntax:

CREATE CREDENTIAL FOR USER <user_name> COMPONENT 'SAPHANAFEDERATION'
PURPOSE '<remote_source_name>' TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
<user><user_name></user>
<password><password></password>
</CredentialEntry>';
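
For example, a minimal sketch that grants the secondary user JSMITH credentials for a hypothetical remote
source named MyRemoteSource:

 CREATE CREDENTIAL FOR USER JSMITH COMPONENT 'SAPHANAFEDERATION'
 PURPOSE 'MyRemoteSource' TYPE 'PASSWORD' USING
 '<CredentialEntry name="credential">
 <user>sourceuser</user>
 <password>sourcepassword</password>
 </CredentialEntry>';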

4.3.2 Suspend and Resume Remote Sources

You can suspend and resume capture and distribution for remote sources within the Data Provisioning Remote
Subscription Monitor.

Prerequisites

The user must have the following roles or privileges to suspend and resume capture and distribution:

Table 23: Roles and Privileges

Action Role or Privilege

Suspend capture or distribution ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AlterRemoteSource
● Object privilege: ALTER on the remote source

Resume capture or distribution ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::AlterRemoteSource
● Object privilege: ALTER on the remote source

Context

Use the Alter Remote Source button in the monitor to perform the action.

Procedure

1. Select the remote source in the Remote Source Monitor table and click Alter Remote Source.
2. Click Suspend or Resume for CAPTURE or DISTRIBUTION.
Confirmation of the action is displayed in the status console.
3. Close the Alter Remote Source dialog.

Results

Capture or distribution for the selected remote source is suspended or resumed.
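
Equivalently, you can suspend and resume in SQL with ALTER REMOTE SOURCE. A minimal sketch, using a
hypothetical remote source name:

 ALTER REMOTE SOURCE "MySource" SUSPEND CAPTURE;
 ALTER REMOTE SOURCE "MySource" RESUME CAPTURE;
 ALTER REMOTE SOURCE "MySource" SUSPEND DISTRIBUTION;
 ALTER REMOTE SOURCE "MySource" RESUME DISTRIBUTION;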

Related Information

ALTER REMOTE SOURCE Statement [Smart Data Integration] [page 159]


Assign Roles and Privileges

4.3.3 Alter Remote Source Parameters

You can modify some remote source parameters while the remote source is suspended.

Context

In the Installation and Configuration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data
Quality, see each adapter's remote source description topic regarding which parameters you can modify when
a remote source is suspended.

 Note

You can’t change the User Name parameter when the remote source is suspended.

Procedure

1. In the Data Provisioning Remote Subscription Monitor, suspend capture on the remote source.
2. In the SAP HANA Web-based Development Workbench catalog, change the intended remote source
parameters.
3. Re-enter the credentials for the remote source and save the changes.
4. Resume capture on the remote source.

Related Information

Configure Data Provisioning Adapters

4.3.4 Manage Remote Subscriptions

You can drop, queue, distribute, and reset remote subscriptions within the Data Provisioning Remote
Subscription Monitor.

Prerequisites

The user must have the following roles or privileges to manage remote subscriptions:

Table 24: Roles and Privileges

Action Role or Privilege

Reset, queue, or distribute remote ● Role: sap.hana.im.dp.monitor.roles::Operations


subscription ● Application privilege: sap.hana.im.dp.monitor::AlterRemoteSubscription
● Object privilege: ALTER on the remote subscription

Drop remote subscription ● Role: sap.hana.im.dp.monitor.roles::Operations


● Application privilege: sap.hana.im.dp.monitor::DropRemoteSubscription
● Object privilege: DROP on the remote subscription

Context

Use the buttons in the Remote Subscription Monitor table to perform the action.

Procedure

1. Select the remote subscription in the Remote Subscription Monitor table.


2. Click Queue, Distribute, Reset, or Drop.

 Note

A warning appears if you attempt to drop a remote subscription that is used by any flowgraphs or
replication tasks. Click Drop if you want to continue and drop the remote subscription anyway.

Results

The remote subscription is queued, distributed, or reset. If you drop a remote subscription, the subscription is
removed from the Remote Subscription Monitor table.
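For reference, the equivalent SQL statements are sketched below; <subscription_name> is a placeholder. See the ALTER REMOTE SUBSCRIPTION and DROP REMOTE SUBSCRIPTION statement references for the complete syntax:

ALTER REMOTE SUBSCRIPTION <subscription_name> QUEUE;
ALTER REMOTE SUBSCRIPTION <subscription_name> DISTRIBUTE;
ALTER REMOTE SUBSCRIPTION <subscription_name> RESET;
DROP REMOTE SUBSCRIPTION <subscription_name>;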

Related Information

ALTER REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 163]


CREATE REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 174]
DROP REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 184]
Assign Roles and Privileges

4.3.5 Processing Remote Source or Remote Subscription
Exceptions

Evaluate and resolve remote subscription exceptions.

If an error occurs, or if, for example, the row count on the target table doesn't match the source, look at the Exceptions
table and process the entries.

Use the following syntax:

PROCESS REMOTE SUBSCRIPTION EXCEPTION <exception_oid> IGNORE|RETRY;

To process a remote source or remote subscription exception using the monitoring UI:

1. Click the status of the remote source or remote subscription.


2. Select the error.
3. Click either Retry Operations or Ignore Error.
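For example, the following is a minimal sketch, assuming the system view REMOTE_SUBSCRIPTION_EXCEPTIONS is available to look up the exception OID (the OID value 12345 is a placeholder):

SELECT EXCEPTION_OID, OBJECT_NAME, ERROR_MESSAGE FROM "SYS"."REMOTE_SUBSCRIPTION_EXCEPTIONS";
PROCESS REMOTE SUBSCRIPTION EXCEPTION 12345 RETRY;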

Related Information

PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration] [page 187]

4.4 Managing Design Time Objects

Design time objects such as flowgraphs and replication tasks manage the replication and transformation of
data in SAP HANA smart data integration and SAP HANA smart data quality.

Execute Flowgraphs and Replication Tasks [page 62]


You can execute design time objects including flowgraphs and replication tasks from the Data
Provisioning Design Time Object Monitor.

Schedule Flowgraphs and Replication Tasks [page 63]


You can schedule design time objects including flowgraphs and replication tasks within the Data
Provisioning Design Time Object Monitor.

Stop Non-Realtime Flowgraph Executions [page 64]


You can stop the execution of non-realtime flowgraphs within the Data Provisioning Design Time Object
Monitor.

Start and Stop Data Provisioning Tasks [page 65]


You can start and stop tasks within the Data Provisioning Task Monitor.

Schedule Data Provisioning Tasks [page 66]


You can schedule tasks within the Data Provisioning Task Monitor.

Executing Partitions [page 67]


How to execute partitions in a task.

Parent topic: Administering Data Provisioning [page 35]

Related Information

Managing Agents and Adapters [page 35]


Managing Agent Groups [page 44]
Managing Remote Sources and Subscriptions [page 53]
Managing Enterprise Semantic Services [page 70]

4.4.1 Execute Flowgraphs and Replication Tasks

You can execute design time objects including flowgraphs and replication tasks from the Data Provisioning
Design Time Object Monitor.

 Restriction

Real-time flowgraphs and replication tasks can’t be executed from the Data Provisioning Design Time
Object Monitor.

Prerequisites

The user must have the following roles or privileges to execute flowgraphs and replication tasks.

Table 25: Roles and Privileges

Action: Execute flowgraph or replication task
● Role: sap.hana.im.dp.monitor.roles::Operations
● Application privilege: sap.hana.im.dp.monitor::ExecuteDesignTimeObject
● Object privilege: EXECUTE on the object schema
● Object privilege: Any additional object privileges needed within the task (for example, ALTER, CREATE ANY, DELETE, DROP, EXECUTE, INDEX, INSERT, and so on)

Procedure

1. Select the flowgraph or replication task in the Design Time Objects table.
2. Click Execute.
a. If the object uses table type parameters, select the tables to use when executing the object.
b. If the object uses variable parameters, specify the values to use when executing the object.

c. Click Execute.

Results

The object execution begins and the task appears in the Task Monitor table.
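For reference, you can also start the task behind a design time object with the START TASK statement. The following is a minimal sketch; the schema, package, and object names are placeholders, and the exact task name generated for a flowgraph or replication task can vary:

START TASK "MY_SCHEMA"."my.package::my_flowgraph";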

Related Information

START TASK Statement [Smart Data Integration] [page 189]


Assign Roles and Privileges

4.4.2 Schedule Flowgraphs and Replication Tasks

You can schedule design time objects including flowgraphs and replication tasks within the Data Provisioning
Design Time Object Monitor.

 Restriction

Real-time flowgraphs and replication tasks can’t be scheduled from the Data Provisioning Design Time
Object Monitor.

Prerequisites

The user must have the following roles or privileges to schedule flowgraphs and replication tasks.

Table 26: Roles and Privileges

Action: Enable users to schedule design time objects
● Role: sap.hana.xs.admin.roles::JobSchedulerAdministrator

Action: Schedule flowgraph or replication task
● Role: sap.hana.im.dp.monitor.roles::Operations
● Application privilege: sap.hana.im.dp.monitor::ScheduleDesignTimeObject

To activate task scheduling, the following must occur:

● Enable scheduling via the XS Job Admin Dashboard at /sap/hana/xs/admin/jobs/. (The user who enables other users to schedule needs the role sap.hana.xs.admin.roles::JobSchedulerAdministrator.)
● To schedule design time objects, the job sap.hana.im.dp.monitor.jobs::scheduleTask needs to be enabled on the XS Job Details page: /sap/hana/xs/admin/jobs/#/package/sap.hana.im.dp.monitor.jobs/job/scheduleTask

Procedure

1. Select the flowgraph or replication task in the Design Time Objects table.
2. Click the Schedules button.
The Schedules dialog appears.
3. To create a new schedule for the task, click Add.
a. Select the frequency (once or recurring), interval if recurring (year, month, week, day, hour, minute,
second), and the time (local, not server time or UTC) for the object execution.
b. If the object uses table type parameters, select the tables to use when executing the object.
c. If the object uses variable parameters, specify the values to use when executing the object.
d. Click Schedule.
The new schedule is added to the list of schedules for the object.
4. To remove an existing schedule, select the schedule and click Delete.
The schedule is removed from the list of schedules for the object.
5. Close the Schedules dialog.

Results

Your object executes as scheduled and you can monitor the results of each execution of the object.

Related Information

Assign Roles and Privileges

4.4.3 Stop Non-Realtime Flowgraph Executions

You can stop the execution of non-realtime flowgraphs within the Data Provisioning Design Time Object Monitor.

Prerequisites

The user must have the following roles or privileges to stop flowgraph execution.

Table 27: Roles and Privileges

Action: Stop flowgraph execution
● Role: sap.hana.im.dp.monitor.roles::Operations
● Application privilege: sap.hana.im.dp.monitor::StopTask

Procedure

1. Select the task for the flowgraph in the Task Monitor table.
2. Click Stop.

Results

The selected flowgraph execution instance is stopped.
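For reference, the same result can be achieved with the CANCEL TASK statement. The following is a minimal sketch, assuming the monitoring view M_TASKS is available to look up the task execution ID:

SELECT TASK_EXECUTION_ID, STATUS FROM "SYS"."M_TASKS" WHERE STATUS = 'RUNNING';
CANCEL TASK <task_execution_id>;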

Related Information

CANCEL TASK Statement [Smart Data Integration] [page 164]


START TASK Statement [Smart Data Integration] [page 189]

4.4.4 Start and Stop Data Provisioning Tasks

You can start and stop tasks within the Data Provisioning Task Monitor.

Prerequisites

The user must have the following privileges to start or stop tasks:

Table 28: Privileges

Action: Start task
● Application privilege: sap.hana.im.dp.monitor::StartTask

Action: Stop task
● Application privilege: sap.hana.im.dp.monitor::StopTask

Procedure

● To start a task, select a task in the Task Overview table.


a. Click Start.
b. If the object uses table type parameters, select the tables to use when executing the object.
c. If the object uses variable parameters, specify the values to use when executing the object.

 Note

Tasks that belong to real-time design time objects can’t be started or scheduled from the Data
Provisioning Task Monitor.

● To stop a task, select the running task in the Task Execution Monitor table and click Stop.

Note that there might be a delay in stopping the task, depending on when the cancellation was initiated and
which operation is pending.
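For reference, tasks can also be started and stopped in SQL with the START TASK and CANCEL TASK statements. The following is a minimal sketch; the task name and execution ID are placeholders:

START TASK "MY_SCHEMA"."my.package::my_task";
CANCEL TASK <task_execution_id>;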

Related Information

CANCEL TASK Statement [Smart Data Integration] [page 164]


START TASK Statement [Smart Data Integration] [page 189]
Assign Roles and Privileges

4.4.5 Schedule Data Provisioning Tasks

You can schedule tasks within the Data Provisioning Task Monitor.

Prerequisites

The user must have the following roles or privileges to schedule tasks.

Table 29: Roles and Privileges

Action: Enable users to schedule tasks
● Role: sap.hana.xs.admin.roles::JobSchedulerAdministrator

Action: Schedule task
● Role: sap.hana.im.dp.monitor.roles::Operations
● Application privilege: sap.hana.im.dp.monitor::ScheduleTask

To activate task scheduling, the following must occur:

● Enable scheduling via the XS Job Admin Dashboard at /sap/hana/xs/admin/jobs/. (The user who enables other users to schedule needs the role sap.hana.xs.admin.roles::JobSchedulerAdministrator.)
● To schedule tasks, the job sap.hana.im.dp.monitor.jobs::scheduleTask needs to be enabled on the XS Job Details page: /sap/hana/xs/admin/jobs/#/package/sap.hana.im.dp.monitor.jobs/job/scheduleTask

Procedure

1. Select the task in the Task Monitor.


2. Click the Schedules button.
The Schedules dialog appears.
3. To create a new schedule for the task, click Add.
a. Select the frequency (once or recurring), interval if recurring (year, month, week, day, hour, minute),
and the time (local, not server time or UTC) for the task execution.
b. If the task uses table type parameters, select the tables to use when executing the task.
c. If the task uses variable parameters, specify the values to use when executing the task.
d. Click Schedule.
The new schedule is added to the list of schedules for the task.
4. To remove an existing schedule, select the schedule and click Delete.
The schedule is removed from the list of schedules for the task.
5. Close the Schedules dialog.

Results

Your task executes as scheduled and you can monitor the results of each execution of the task.

Related Information

Assign Roles and Privileges

4.4.6 Executing Partitions

How to execute partitions in a task.

In the Data Provisioning Task Monitor, you can view and execute partitions in the following ways:

● Execute a specific partition.


● Execute partitions that didn't run or failed after executing a task.
● Execute a specific failed partition.

Related Information

Execute a Specific Partition [page 68]


Execute Remaining Partitions [page 69]
Execute a Failed Partition [page 69]
Partition Data in a Replication Task
Partitioning Data in the Flowgraph

4.4.6.1 Execute a Specific Partition

How to execute a specific partition in a task.

Context

In the Data Provisioning Task Monitor, you can view and execute specific partitions.

Procedure

1. In the Task Overview table, select a task and select Start.

If there are partitions (or table type parameters) configured for the flowgraph or replication task, the Set
Parameters for Task <name> dialog displays.
2. Select the partition to execute.

If no partitions are selected, all partitions execute.
3. Select Start.

Example

You can execute a specific partition using the following query:

START TASK <task_name> ('_task_execute_partition'=>'<partition_name>');

4.4.6.2 Execute Remaining Partitions

How to execute all partitions that didn't run or failed.

Context

After executing a task, in the Data Provisioning Task Monitor you can execute all the partitions that didn't run or
failed.

Procedure

1. Correct any issues that caused failures.


2. In the Task Execution Monitor table, select a failed task execution.
3. Select Execute Remaining Partitions.

Example

You can execute remaining partitions using the following query:

START TASK <task_name> ('_task_execute_remaining_partitions_since_execution_id'=>'<execution_id>');

4.4.6.3 Execute a Failed Partition

How to execute a specific failed partition in a task.

Context

After executing a task, in the Data Provisioning Task Monitor you can execute a specific partition that failed.

Procedure

1. Correct any issues that caused failures.

2. In the Task Execution Monitor table, for a failed task execution, select the Partition Count.

The Task Partition Execution Details for Task <name> dialog displays.


3. In the Task Partition Execution Monitor table, select the failed partition to execute.
4. Select Execute Failed Partition.

Example

You can execute a specific failed partition using the following query:

START TASK <task_name> ('_task_execute_partition'=>'<partition_name>');

4.5 Managing Enterprise Semantic Services

Use the SAP HANA Enterprise Semantic Services Administration browser-based application to administer and
monitor artifacts for semantic services.

To launch the SAP HANA Enterprise Semantic Services Administration tool, enter the following URL in a web
browser:

http://<your_HANA_instance>:<port>/sap/hana/im/ess/ui

The interface includes the following components (tiles):

Publication Schedules: Publish and unpublish artifacts. Schedule publishing and data profiling requests.

Publication Requests: View the details and status of all requests.

Published Artifacts: View and remove artifacts from the knowledge graph.

Data Profiling Blacklist: Prevent data profiling for selected artifacts.

After you drill into a component, you can click the navigation menu in the upper-left corner to open
other components or return to the Home page.

Parent topic: Administering Data Provisioning [page 35]

Related Information

Roles for Enterprise Semantic Services [page 71]


Enterprise Semantic Services Knowledge Graph and Publication Requests [page 72]
Publishing Artifacts [page 72]

Monitor the Status of Publication Requests [page 75]
Manage Published Artifacts [page 78]
Data Profiling [page 81]
Setting Configuration Parameters [page 83]
Troubleshooting Enterprise Semantic Services [page 84]
Managing Agents and Adapters [page 35]
Managing Agent Groups [page 44]
Managing Remote Sources and Subscriptions [page 53]
Managing Design Time Objects [page 61]
About SAP HANA Enterprise Semantic Services

4.5.1 Roles for Enterprise Semantic Services

SAP HANA role requirements for Enterprise Semantic Services (ESS).

The following database roles control access to Enterprise Semantic Services.

● sap.hana.im.ess.roles::Administrator: Use the SAP HANA ESS Administration tool.
● sap.hana.im.ess.roles::Configurator: Update the SAP HANA ESS configuration.
● sap.hana.im.ess.roles::DataSteward: Use the remote view (sap.hana.im.ess.services.views:REMOTE_OBJECTS) or the lineage table functions (sap.hana.im.ess.services.views.datalineage:GET_ALL_IMPACTING_TABLES, sap.hana.im.ess.services.views.datalineage:GET_IMPACTING_TABLES, sap.hana.im.ess.services.views.datalineage:GET_LINEAGE_FROM_VIEW, sap.hana.im.ess.services.views.datalineage:GET_LINEAGE_FROM_SCHEMA).
● sap.hana.im.ess.roles::Publisher: Use the publishing APIs.
● sap.hana.im.ess.roles::User: Use the search or ctid API, the remote view (sap.hana.im.ess.services.views:REMOTE_OBJECTS), or the secured data lineage table functions (sap.hana.im.ess.services.views.datalineage:GET_ACCESSIBLE_LINEAGE_FROM_VIEW, sap.hana.im.ess.services.views.datalineage:GET_ACCESSIBLE_LINEAGE).
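For example, the following call is a minimal sketch of granting one of these activated roles to a user; the user name is a placeholder:

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.ess.roles::Administrator','<user_name>');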

4.5.2 Enterprise Semantic Services Knowledge Graph and
Publication Requests

Enterprise Semantic Services enables searching and profiling datasets.

Enterprise Semantic Services uses a knowledge graph that describes the semantics of the datasets that are
available to users or applications connected to SAP HANA. It is natively stored in the SAP HANA database.

Datasets represented in the knowledge graph can include tables, SQL views, SAP HANA views, remote objects
in remote sources, and virtual tables that refer to remote objects.

An Enterprise Semantic Services publication request extracts information from a resource and publishes it in
the knowledge graph. When a user searches for an object based on its metadata and contents, the knowledge
graph provides the results.

The knowledge graph is populated by one or more of the following methods:

● An SAP HANA administrator uses the Enterprise Semantic Services Administration tool to publish datasets.
● An SAP HANA administrator configures the Enterprise Semantic Services REST API so that an application can publish datasets.
● If an application has already been configured to call the Enterprise Semantic Services REST API, the application can populate the knowledge graph. For example, in SAP HANA Agile Data Preparation, when you add a worksheet, the content publishes to the knowledge graph.

Related Information

Enabling Enterprise Semantic Services


Managing Enterprise Semantic Services [page 70]
SAP HANA Enterprise Semantic Services JavaScript API Reference
SAP HANA Enterprise Semantic Services REST API Reference

4.5.3 Publishing Artifacts

The SAP Enterprise Semantic Services (ESS) Administration tool lets you publish (or unpublish) artifacts.

You can publish or unpublish an artifact programmatically using the on-demand Enterprise Semantic Services
API. This method is useful for applications that manage the life cycle of their artifacts, that is, applications that
create, delete, and update SAP HANA artifacts. The application determines which artifacts to publish,
republish, or unpublish to the Enterprise Semantic Service knowledge graph. An example is the SAP Agile Data
Preparation application.

Administrators can also use the SAP HANA ESS Administration tool to publish or unpublish artifacts. This is
useful for applications that manage the life cycle of SAP HANA artifacts but do not want to (or cannot easily)
integrate artifact management with Enterprise Semantic Services. An example is the SAP ERP application. In
those cases, it is easier to delegate to an administrator the task of determining which artifacts should be
published to Enterprise Semantic Services, depending on the needs of the application (for example, access to a
semantic service like search).

The best practice is to separate the artifacts published by applications using the on-demand ESS API from
those published by an administrator using the SAP ESS Administration tool. The artifacts then belong
to different publisher groups, as shown on the SAP ESS Administration tool Published Artifacts tile.

Related Information

Publish Artifacts [page 73]


Information Available in the Publication Schedule [page 74]
Remove a Schedule for a Deleted Artifact [page 75]
Manage Published Artifacts [page 78]
Information Available in Published Artifacts [page 79]

4.5.3.1 Publish Artifacts

Use the SAP HANA Enterprise Semantic Services (ESS) Administration tool to publish artifacts in the
knowledge graph.

Procedure

1. Select the Publication Schedules tile.


2. In the Published Artifact Browser, expand the nodes and select an artifact.

Note that in the following browsers, you can search for an object within a node using a Filter: Publication
Schedules (catalog only), Published Artifacts, Data Profiling Blacklist, Entity Grid Tags.
1. Right-click the object and select Filter. If the Filter option does not display for the object, select Refresh.
2. In the Filter dialog box, start typing the object name or part of the name. The list filters as you type.
3. Select OK to save the filter on the node. To clear the filter later, right-click the object and select Remove
filter.

 Note

Selecting Refresh on an object removes all filters from the object and its children.

3. To include the artifact, select Include for publication.

Children of the artifact inherit the INCLUDED configuration of the parent unless specifically excluded.
4. Configure the publication schedule as follows.
a. For Next Scheduled Date, click in the box to select a date and time to next publish the artifact.

If you do not enter a date and time, it is set to the current date and time.

b. Enter a frequency Period.
5. Configure Data profiling options.
a. Select Discover content type to include content types in the publication; however, this can impact
performance and is not suggested for production scenarios.
b. Select Extract searchable values to extract values; however, this can impact performance and is not
suggested for production scenarios.
6. Select Active to enable the schedule.

To deactivate a schedule, clear the checkbox and click Save.


7. Select Save.

The schedule displays in the Schedules table.


8. To stop (cancel) the publication of an artifact: open the Publication Requests monitor, refresh the view, select
the Stop checkbox for the artifact(s), and select the Stop icon in the upper-right corner of the window.
9. Optionally, from the navigation menu, open Publication Requests to confirm the status of the request (for
example, REQUEST_PENDING or DONE).
10. To exclude objects, select the object, select Exclude for publication, and select Save.

Results

The objects appear in the browser tree marked with a solid green plus symbol (for included objects) or a solid
red minus symbol (for excluded objects). Inherited objects display an outlined green or red symbol. These
markers indicate that a request action has been initiated, but they are independent of the actual status of the request.
To view the status of the request, open the Publication Requests monitor. To view the results of requests, view
the Published Artifacts monitor.

4.5.3.2 Information Available in the Publication Schedule

The Publication Schedule displays a table that describes all of the scheduled publication and data profiling
requests.

The search field above the table lets you search for published artifacts.

Publication Artifact (filterable): Fully qualified name of the published artifact.

Publication (filterable): The mode selected for the schedule (INCLUDE, EXCLUDE, INHERIT).

Next Scheduled Date: The timestamp for when the schedule will next execute.

Period (filterable): The selected frequency of the schedule.

Active: Whether or not the schedule has been activated.

Discover: Whether or not the option for Discover content type was selected.

Extracted Searchable Values: Whether or not the option for Extract searchable values was selected.

Warning (filterable): A warning indicates that a scheduled artifact has been deleted. The Delete option will be enabled on the schedule.

Delete: To delete a schedule for an artifact that has been deleted, select the checkbox and click Delete.

4.5.3.3 Remove a Schedule for a Deleted Artifact

Remove a publication schedule for an artifact that was deleted.

Context

When an artifact is configured as included for publication, a publication schedule appears in the table of
schedules. If the artifact gets deleted, the publication schedule remains until the crawler can no longer detect the
artifact. A warning then appears for that schedule.

Procedure

1. Select the Publication Schedules tile.


2. For the artifact that displays a warning, the Delete option will be enabled on that schedule. Select the
checkbox and click Delete above the column.

4.5.4 Monitor the Status of Publication Requests

Use Publication Requests to view and monitor publishing and profiling requests.

Context

To monitor the status of all requests, select the Publication Requests tile.

A user can search for only those catalog or remote objects that are described in the knowledge graph as a
result of successful publication requests. However, if the name of an artifact unexpectedly does not appear in
the search results, the publication of the corresponding artifact might have failed.

Related Information

Information Available on Publication Requests [page 76]

4.5.4.1 Information Available on Publication Requests

Enterprise Semantic Services Publication Requests displays the status and any error messages for each
request.

The search field above the table lets you search for published artifacts.

Detail: Click the magnifying glass icon to see more details. The Detail page displays the following information and statistics:

● Request type (see list below)
● Submission timestamp
● Request status (see list below)
● Publisher (user)
● Publisher Group
● Number of basic artifacts to publish
● Number of basic artifacts to unpublish
● Number of basic artifacts to profile

For the latter three statistics, you can see the associated number of requests that are Successful, Failed, In progress, or Not started.

The table displays each Publication Artifact and its Publication Status, Data Profiling Status, Error Code, and Error Message, if any.

Requests with a FAILED status include a Retry checkbox. To retry the request, select the checkbox and click Retry at the top of the window. To retry all FAILED requests, select the Retry checkbox in the column heading.

ID (filterable): A number that helps identify a request in the list of requests. This number might not be unique in some cases.

Request Date: Timestamp of when the request was executed.

Publication Artifact (filterable): Fully qualified name of the catalog object or repository object that was published.

Artifact Type (filterable): The type of artifact to publish:

● SAP HANA views in the repository: attributeview, analyticview, calculationview
● SAP HANA catalog objects: table, view, columnview, virtualtable

Publisher Group (filterable): Indicates whether a publication was scheduled using the SAP HANA ESS Administration tool or using the REST API. In the former case, the predefined publisher group is sap.hana.im.ess.AdminPublisherGroup. In the latter case, a call to the publish() API must specify a publisherGroup parameter that defines the ownership of the specified publication in the knowledge graph.

Publisher (filterable): Name of the SAP HANA user who submitted the request.

Request Type (filterable): Request types on the Publication Requests monitor home page include:

● ON_DEMAND_PUBLISH
● ON_DEMAND_UNPUBLISH
● SCHEDULED_PUBLISH
● MONITORING_UNPUBLISH
● RETRY_ON_DEMAND_PUBLISH
● RETRY_ON_DEMAND_UNPUBLISH
● RETRY_SCHEDULED_PUBLISH

Request types on the artifact Detail page include the following:

● PUBLISH_NOT_STARTED
● UNPUBLISH_NOT_STARTED
● UNPUBLISH_IN_PROGRESS
● PUBLISH_DONE
● PUBLISH_FAILED
● UNPUBLISH_FAILED

Status (filterable): Status values on the Publication Requests monitor home page include:

● REQUEST_PENDING
● IN_PROGRESS
● DONE
● DONE_WITH_ERRORS
● NOTHING_DONE
● STOPPING
● STOPPED

Status values on the artifact Detail page include:

● PROFILING_NOT_STARTED
● PROFILING_IN_PROGRESS
● PROFILING_DONE
● PROFILING_FAILED
● INACTIVATED
● NOT PROFILABLE
● BLACKLISTED
● OBSOLETE
● PUBLICATION_FAILED
● STOPPED

Error Code (filterable): Error code when the status is FAILED. Each range of numbers indicates a specific area as follows:

● 100-199: SAP HANA adapter errors
● 200-399: Prepare, extract, load, and deploy jobs
● 700-799: Miscellaneous

Error Message: Description of the error.

Retry: To retry one or more requests, select the Retry checkbox for each, or select the Retry checkbox in the column heading to select all failed requests, and select the Retry button.

Stop: To display requests that are currently in progress, select the Refresh icon to update the Status column. To move the latest publication requests to the top of the list, for Request Date select Sort Descending. To stop (cancel) one or more in-progress requests, select the Stop checkbox for each, or select the Stop checkbox in the column heading to select all requests, and select the Stop button.

4.5.5 Manage Published Artifacts

Use Published Artifacts to view the artifacts that have been published to the knowledge graph and remove
(unpublish) them.

Context

The knowledge graph describes the semantics of published artifacts (datasets). Metadata crawlers and data
profiling requests let you publish artifacts to the knowledge graph, so that applications can search for and
locate these objects and their metadata.

There are two ways to publish artifacts to the knowledge graph: the HTTP REST API publish() method and
the SAP HANA Enterprise Semantic Services Administration tool. If the same artifact gets published by both
mechanisms, the artifact is identified in the Published Artifacts monitor as belonging to a corresponding
publisher group. Publisher groups therefore define ownership of specific publications in the knowledge graph.

When an artifact is published with a specific publisher group, it can only be unpublished by that group. If the
same artifact has been published with multiple publisher groups, it can only be unpublished when all
corresponding publisher groups unpublish it. This control helps avoid conflicts between applications and an
administrator using the Administration tool. Otherwise, an application could publish an artifact and another
application or administrator could unpublish it.

In the case of the HTTP publish() API, the publisher group name is specific to the application; for example,
for SAP HANA Agile Data Preparation, it could be com.sap.hana.im.adp. For the SAP HANA ESS Administration
tool, the predefined publisher group name is sap.hana.im.ess.AdminPublisherGroup.

To limit the size of both extracted metadata elements and extracted searchable attribute values in the knowledge
graph, you can also select artifacts to unpublish.

Procedure

1. Select the Published Artifacts tile.


2. Expand the nodes on the Published Artifact Browser to find the object to view and select it.

Note that in the following browsers, you can search for an object within a node using a Filter: Publication
Schedules (catalog only), Published Artifacts, Data Profiling Blacklist, Entity Grid Tags.
1. Right-click the object and select Filter. If the Filter option does not display for the object, select Refresh.
2. In the Filter dialog box, start typing the object name or part of the name. The list filters as you type.
3. Select OK to save the filter on the node. To clear the filter later, right-click the object and select Remove
filter.

 Note

Selecting Refresh on an object removes all filters from the object and its children.

3. Select the Publisher Group as necessary.


4. Review the table, which displays all of the published artifacts, when they were last refreshed, and the number of
metadata elements in each.
5. To remove an artifact and its data profiling information, select its Unpublish checkbox and click Save. To
unpublish all displayed artifacts, select the Unpublish checkbox in the column heading and click Save.

View the Publication Requests monitor to confirm that the object was removed. For example, the Request
Type would indicate MONITORING_UNPUBLISH.

Related Information

Information Available in Published Artifacts [page 79]


Information Available on Publication Requests [page 76]

4.5.5.1 Information Available in Published Artifacts

Enterprise Semantic Services Published Artifacts displays artifacts that have been published and also lets you
remove (unpublish) artifacts from the knowledge graph.

The Published Artifact Browser displays all the published objects available in the Catalog, Content, and Remote
Sources folders. The size of an artifact is measured as the total number of searchable metadata elements and
searchable attribute values extracted from that artifact.

The search field above the table lets you search for published artifacts.

Publisher Group: Indicates whether a publication was scheduled using the SAP HANA ESS Administration tool or using the REST HTTP publish() API. When an artifact is published with a specific publisher group, the artifact can only be unpublished by the same group. If the same artifact has been published with different publisher groups, the artifact is unpublished only when all associated groups have unpublished it.

For the SAP HANA ESS Administration tool, the predefined publisher group name is sap.hana.im.ess.AdminPublisherGroup. For the REST HTTP publish() API, a call to the publish() API must specify a publisherGroup parameter that determines the name of the publisher group.

Publication Artifact name: Name of the selected artifact.

Number of published artifacts: Number of basic artifacts recursively contained in the selected artifact when the selected artifact is a container. If the selected artifact is a basic artifact, the number of published artifacts is equal to 1.

Number of metadata elements: Total number of extracted metadata elements in the selected artifact.

Number of extracted values: Total number of attribute values extracted in the selected artifact.

Publication Artifact (filterable; wildcards supported): Qualified name of an artifact that was published or contains published artifacts. The fully qualified name is described by three attributes:

● Origin: Catalog or Content
● Container: Schema name, package path, or virtual container name
● Artifact: Name of the artifact (basic or container) in its parent container

Oldest Refresh: Oldest date of updated basic artifacts in the corresponding container. This date is NULL in the case of a basic artifact.

Last Refresh: Most recent date of updated basic artifacts in the corresponding container. This date is the last update in the case of a basic artifact.

Basic Artifacts (filterable): Number of published basic artifacts recursively contained in the corresponding container. This value is 1 in the case of a basic artifact.

Removable Metadata (filterable): Number of non-shared metadata elements. It indicates the number of searchable metadata elements extracted from the corresponding published artifacts that are not shared with other published artifacts. This number gives an indication of how many metadata elements would be removed if you unpublished the artifact.

Removable Values (filterable): Number of searchable attribute values extracted for the catalog object represented by the published artifact. It indicates the number of metadata elements that are not shared with other published artifacts. This number gives an indication of how many profiled values would be removed in the case of unpublishing.

Unpublish: To unpublish the artifact, select the Unpublish check box and click Save. To unpublish all displayed artifacts, select the Unpublish check box in the column heading and click Save.

4.5.6 Data Profiling

Enterprise Semantic Services can profile the contents of artifacts that have been published to the knowledge
graph.

Data profiling is a process that analyzes the values contained in specific columns of a dataset (the columns to
analyze are specified internally using ESS logic). Analysis of the data in a column discovers business types, and
searchable values can then be extracted and indexed using SAP HANA full text index.

Enterprise Semantic Services can profile the contents of the following artifacts:

● SQL tables
● Column views issued from graphical Calculation views, Attribute, and Analytic views
● Virtual tables created from remote objects of a remote source with the PASSWORD credential type (see
the topic “CREATE REMOTE SOURCE” in the SAP HANA SQL and System Views Reference).

 Note

When requesting profiling of a catalog object that does not result from an activation, you must grant the
SELECT privilege WITH GRANT OPTION to the technical user _HANA_IM_ESS. (For activated objects, no action
is needed.)
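For example (the schema and object names are placeholders):

GRANT SELECT ON "<schema>"."<catalog_object>" TO _HANA_IM_ESS WITH GRANT OPTION;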

Current limitations include the following:

● Calculation views with dynamic privileges are not profiled.
● Views with required parameters are not profiled.
● Calculation views with dependencies on other views are only profiled if they are referenced by exactly the same set of analytic privileges as their dependent views. In this version, it is advised to create only a single analytic privilege that references all views. Future versions will handle dependencies with different privileges.

Related Information

Limiting Objects to Profile [page 82]


Information Available on the Data Profiling Blacklist [page 83]

4.5.6.1 Limiting Objects to Profile

You can prevent artifacts (limited to catalog objects) from being profiled.

Context

Limiting the artifacts to profile lets you control the volume of searchable attribute values or avoid extracting
searchable values from datasets that hold sensitive or personal data.

To prevent an artifact from being profiled, an administrator can blacklist artifacts. When a catalog object that
was previously profiled is blacklisted, all of its extracted searchable attribute values are immediately removed
from the knowledge graph. The catalog object will never be profiled again, even if there is still a data profiling
schedule associated with the object(s).

To blacklist an artifact, follow these steps:

Procedure

1. Select the Data Profiling Blacklist tile.


2. In the Catalog Object Browser, expand the nodes and select an artifact to blacklist. You can also select a
schema to list all its children then select objects to blacklist from within it.

Note that in the following browsers, you can search for an object within a node using a Filter: Publication
Schedules (catalog only), Published Artifacts, Data Profiling Blacklist, Entity Grid Tags.
1. Right-click the object and select Filter. If the Filter option does not display for the object, select Refresh.
2. In the Filter dialog box, start typing the object name or part of the name. The list filters as you type.
3. Select OK to save the filter on the node. To clear the filter later, right-click the object and select Remove
filter.

 Note

Selecting Refresh on an object removes all filters from the object and its children.

3. To blacklist the artifact, select the Blacklisted check box and click Save. To blacklist all displayed artifacts,
select the Blacklisted check box in the column heading and click Save.

To re-enable data profiling for an artifact, clear the check box and click Save.

4.5.6.2 Information Available on the Data Profiling Blacklist

The Enterprise Semantic Services Data Profiling Blacklist lets you view and choose which artifacts to blacklist
(remove data profiling values).

The search field above the table lets you search for published artifacts.

Catalog Object: Fully qualified name of the selected object.

Blacklisted catalog objects: Number of blacklisted objects in the selected artifact.

Extracted values: The total number of extracted values for the selected object.

Schema Name (filterable; wildcards supported): Name of the schema to which the artifact belongs.

Catalog Object (filterable): Catalog object name in the schema.

Extracted Values (filterable): Number of extracted searchable values for the object.

Blacklisting Date: Timestamp of when the object was blacklisted.

Blacklisted: Select the checkbox to blacklist the object and click Save. Clear the checkbox to enable data profiling for the object and click Save.

4.5.7 Setting Configuration Parameters

As an Enterprise Semantic Services administrator, you can set configuration parameter values such as
maximum sizes of persistent queues, rolling policy of persistent queues, and so on.

To set configuration parameters, in the SAP HANA studio Administration Console, a system administrator sets
values in the reserved table sap.hana.im.ess.eg.configuration::CONFIGURATION. To do so, specific database
procedures and user privileges are required.

 Example

To set the parameter MAX_ESS_PROFILING_JOBS_SCHEDULE_TIME:

1. Re-enable the _HANA_IM_ESS technical user: ALTER USER _HANA_IM_ESS ENABLE PASSWORD LIFETIME;
2. Connect as the _HANA_IM_ESS user with its password (the one used during installation).
3. Execute the procedure to increase the configuration parameter MAX_ESS_PROFILING_JOBS_SCHEDULE_TIME:

CALL "SAP_HANA_IM_ESS"."sap.hana.im.ess.eg.configuration::SET_CONFIGURATION_VALUE"('MAX_ESS_PROFILING_JOBS_SCHEDULE_TIME', <value>)

where <value> can be up to 1150000 (the default is 500000 ms).

4. When finished, disable the password: ALTER USER _HANA_IM_ESS DISABLE PASSWORD LIFETIME;

4.5.8 Troubleshooting Enterprise Semantic Services

Troubleshooting solutions, tips, and API error messages for Enterprise Semantic Services.

Troubleshoot Installation Issues [page 84]


Troubleshoot Enterprise Semantic Services (ESS) installation issues.

Troubleshoot Publishing Issues [page 85]


Troubleshoot Enterprise Semantic Services (ESS) publishing issues.

Troubleshoot Data Profiling Issues [page 85]


Troubleshoot Enterprise Semantic Services (ESS) data profiling issues.

Troubleshoot Search Issues [page 86]


Troubleshoot Enterprise Semantic Services (ESS) search issues.

API Error Messages [page 88]


Troubleshoot Enterprise Semantic Services (ESS) API errors.

Troubleshooting Tips [page 91]


Tips for troubleshooting and preparing diagnostic information for SAP support.

4.5.8.1 Troubleshoot Installation Issues

Troubleshoot Enterprise Semantic Services (ESS) installation issues.

Symptom When importing the HANA_IM_ESS delivery unit (DU), an activation error occurs and appears in the SAP
HANA studio job log view.

Solution Check the job log details in SAP HANA studio. If the error message is not meaningful, then:

● Access the diagnosis file for the index server.


● From the bottom of the trace log, look for the first Check results message, which should indicate the
root cause of the activation failure and suggest how to solve it.

Symptom The ESS DU has been uninstalled using the uninstallation procedure. When you reimport the ESS DU,
activation errors occur, showing that dependent objects are not found.

Cause Activated objects may have dependent objects that do not yet exist and therefore cause an error.

Solution Check the following:

● Verify that all ESS DUs (including the DEMO DU) have been properly uninstalled through the SAP HANA
Application Lifecycle Management console.
● Verify that all related packages have been deleted (those with names starting with sap.hana.im.ess); otherwise
remove them as follows:
○ Create a workspace in the Repositories tab of SAP HANA studio.

○ Remove the packages from there.

4.5.8.2 Troubleshoot Publishing Issues

Troubleshoot Enterprise Semantic Services (ESS) publishing issues.

Symptom The Publication Requests monitor displays a message that includes the phrase If the failure
repeats, contact SAP support.

Cause Transaction serialization failures in concurrency scenarios.

Solution If the transaction that failed was a publish request, on the Publication Requests monitor for the artifact in
question, select the Retry check box and the Retry button.

Symptom Publishing requests appear as not processed in the SAP HANA ESS Administration tool's Publication
Schedules view. Request Status remains REQUEST PENDING or REQUESTED.

Solution Rerun the installation script install.html to verify the installation.

Symptom A publishing request failed (the Request Status is FAILED in the SAP HANA ESS Administration tool
Publication Schedules view).

Cause 1 SAP HANA view definition format is not supported.

Solution 1 The user can “upgrade” the format of the view by editing it (make a small change, such as adding a space) and
saving it.

Cause 2 The SAP HANA view is not supported.

Solution 2 No user action.

Cause 3 API error

Solution 3 Invalid arguments have been passed to the API.

Related Information

API Error Messages [page 88]

4.5.8.3 Troubleshoot Data Profiling Issues

Troubleshoot Enterprise Semantic Services (ESS) data profiling issues.

Symptom Publishing requests have been processed and all ESS background jobs are active, but data profiling requests
appear as not processed in the SAP HANA ESS Administration tool Data Profiling Monitor view. Profiling
Status remains as REQUEST PENDING.

Cause Investigate the error as follows:

● Enable the trace level for xsa:sap.hana.im.ess to ERROR in the SAP HANA Administration Console.
● Inspect the latest diagnosis file xsengine_alert_xxx.trc.
● Check for the following error message:

ESS815=Profiling Internal Error: Script server must be enabled to profile data

Verify whether the script server was created during installation. To do so, in the SAP HANA Administration
Console, view the Configuration tab, and in daemon.ini, expand scriptserver.

Solution See “Configure Smart Data Quality” in the Installation and Configuration Guide.

Symptom A run-time object has the Request Status of FAILED in the SAP HANA ESS Administration tool Data Profiling
Monitor view. An error with message code ESS805 “Insufficient privilege - user xxx must have SELECT
with GRANT option on xxx” is returned.

Cause If the run-time object is not an activated object, then check that the SELECT right on the run-time object has
been granted WITH GRANT OPTION to the technical user _HANA_IM_ESS.

Solution Run the following SQL command in SAP HANA studio:

GRANT SELECT ON <catalog object> TO _HANA_IM_ESS WITH GRANT OPTION

Symptom A data profiling request failed (the Request Status is FAILED in the SAP HANA ESS Administration tool Data
Profiling Monitor view).

Cause 1 Insufficient privileges to _HANA_IM_ESS.

Solution 1 Rerun the install.html script.

Cause 2 API error

Solution 2 Invalid arguments have been passed to the API. See API Error Messages [page 88].

Related Information

Configure Smart Data Quality

4.5.8.4 Troubleshoot Search Issues

Troubleshoot Enterprise Semantic Services (ESS) search issues.

Symptom The SAP HANA user 'User' cannot perform a search using the ESS API. An internal server error message is
returned to the application.

Cause Investigate the error as follows:

● In the SAP HANA Administration Console, set the trace level for xsa:sap.hana.im.ess to INFO.
See Activate Error Trace for Enterprise Semantic Services [page 92].
● Inspect the latest diagnosis file xsengine_alert_xxx.trc.
● Check for the following error message:

Error: import: package access failed due to missing authorization (…)

This means that the 'User' who is publishing has not been granted the role sap.hana.im.ess.roles::User.

Solution Grant the role using the following SQL command:

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.ess.roles::User','user');

Symptom A search query does not return an expected catalog object that exists in the SAP HANA instance.

OR

Suggestions do not show an expected term, although that term is associated with a database object in the
SAP HANA instance.

Cause Insufficient privileges.

Solution Verify the user who is posing the search query has sufficient privileges to access the searchable elements of
the expected catalog object as in the following table.

Database object type: table, SQL view, virtual table, column view
Searchable elements: column metadata
Privileges needed: Owner, object privilege, READ CATALOG, DATA ADMIN

Database object type: table, SQL view, virtual table
Searchable elements: profiled data
Privileges needed: Owner, SELECT object privilege

Database object type: column view
Searchable elements: profiled data
Privileges needed: Owner, SELECT object privilege, analytic privilege

Symptom A search query does not return an expected database object that exists in the SAP HANA instance.

Cause Assuming that the user has the required authorizations, check the syntax of the search query.

Solution See the following examples of common errors.

Search query: Customer name
Unmatched searchable element: Does not match “NAME1”
Correction: Name* (or follow suggested words)

Search query: Sales ATT
Unmatched searchable element: Does not match value “AT&T”
Correction: “AT T” will match AT&T, AT-T, AT/T

Search query: Sales_2006
Unexpected match: Sales, Sales_2007
Correction: “sales_2006”

Search query: Unit sales
Unexpected match: Product unit
Correction: “unit sales”

Search query: Open insurance contract
Unexpected match: Closed insurance contract
Correction: +open insurance contract or “open insurance contract”

Symptom Acronyms or abbreviations are not matched by a search query, which, as a result, does not return an expected
database object that exists in the SAP HANA instance.

Cause A configuration is missing in the term-mapping table:

SAP_HANA_IM_ESS.”sap.hana.im.ess.services.search::Mapping”

Solution To modify the entries in the term mapping table, see "Search Term Mapping" in the Modeling Guide for SAP
HANA Smart Data Integration and SAP HANA Smart Data Quality.

Symptom A search query does not return an expected database object that exists in the SAP HANA instance.

Cause Assuming that the user has sufficient authorizations and the syntax of the search query is correct, then use
the SAP HANA ESS Administration tool Publish and Unpublish Monitor view to verify whether the database
object has been successfully published to ESS, that is, its Request Status is SUCCESSFUL.

Solution If the Request Status is FAILED, take the appropriate actions according to the error code and message.

4.5.8.5 API Error Messages

Troubleshoot Enterprise Semantic Services (ESS) API errors.

Error: ESS100=Error occurred when extracting metadata from a publication artifact.
API: Publish
Action: Not necessarily an API error. First check the trace log file for details on the error.

Error: ESS153=Metadata Extraction Internal Error: No view definition.
API: Publish
Action: Check API artifact argument, or the SAP HANA view was concurrently deleted.

Error: ESS154=Metadata Extraction Internal Error: No package name.
API: Publish
Action: Check API artifact argument, or the package was concurrently deleted.

Error: ESS158=Package name ‘{0}’ does not exist.
API: Publish
Action: Check API artifact argument, or the artifact was concurrently deleted.

Error: ESS159=HANA view ‘{0}/{1}.{2}’ does not exist.
API: Publish
Action: Check API artifact argument, or the artifact was concurrently deleted.

Error: ESS160=Publication artifact of type ‘{0}’ is not supported.
API: Publish
Action: Check API arguments and the list of supported types of publication artifacts.

Error: ESS161=Schema name ‘{0}’ does not exist.
API: Publish
Action: Check API artifact argument, or the schema was concurrently deleted.

Error: ESS162=Catalog object XXX does not exist or is not supported.
API: Publish
Action: Check API artifact argument, or the catalog object was concurrently deleted.

Error: ESS163=Invalid publication artifact qualified name ‘{0}’.
API: Publish
Action: Not necessarily an API error. Verify the artifact exists.

Error: ESS164=Invalid container path ‘{0}’.
API: Publish
Action: Check API artifact argument, or updates happened concurrently. Verify the path still exists.

Errors ESS180 through ESS186 are parsing errors in the publication artifact name:

Error: ESS180=Expecting character ‘{0}’ but encountered character ‘{1}’ in the publication artifact.
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS181=Close quote without a matching open quote in string ‘{0}’ of the publication artifact.
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS182=Invalid catalog publication artifact [’{0}’].
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS183=Invalid publication artifact ‘{0}’ [’{1}’].
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS184=First identification element of the publication artifact is not a catalog or content ‘{0}’.
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS185=Unmatched quote for identification element {0} in publication artifact.
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: ESS186=Identification element {0} should not be empty.
API: Publish
Action: Parsing error in artifact name. Check API artifact argument.

Error: Error containing the string “Search query syntax error” (errors 500 through 526)
API: Search
Action: Search syntax error. First check the explanation given in the error message. Then check the search query in the trace log with trace level DEBUG.

Error: Error containing the string “Request error message” (errors 600 through 645)
API: Search
Action: Request message error. First check the explanation given in the error message. Then check the request message in the trace log with trace level DEBUG.

Error: ESS705=Invalid scope name [{0}].
API: Publish
Action: Only roman letters (both lowercase and uppercase), digits, and underscore characters are valid.

Error: ESS715=Invalid type filter [{0}] for container [{1}]. Please use one of [table,virtualtable,view,columnview]
API: Publish
Action: Invalid argument for typeFilter in the API. Check the explanation given in the error message in the trace log.

Error: ESS720=External source [{0}] not supported.
API: Publish
Action: Check the API parameter “source”. Can only be LOCAL.

Error: ESS725=Mandatory argument [{0}] is missing in function [{1}].
API: Publish, CT on-demand, Search
Action: Mandatory API argument is missing. First check the explanation given in the error message. Then check the search query in the trace log with trace level DEBUG.

Error: ESS730=Invalid value [{0}] for argument [{1}] in function [{2}], expecting: [{3}].
API: Publish, CT on-demand, Search
Action: Invalid argument in the API. First check the explanation given in the error message. Then check the search query in the trace log with trace level DEBUG.

Error: ESS735=Invalid value [{0}] at position [{1}] for argument [{2}] in function [{3}], expecting: [{4}].
API: Publish, CT on-demand, Search
Action: Invalid argument in the API. First check the explanation given in the error message. Then check the search query in the trace log with trace level DEBUG.

Error: ESS808=Profiling Error: Invalid Runtime Object Type {0}. Only the following input values are supported: {1}
API: CT on-demand
Action: Request to profile an unknown type of run-time object. First check the explanation given in the error message. Then check the search query in the trace log with trace level DEBUG.

Error: ESS809=Profiling Error: Column View “{0}”.”{1}” cannot be profiled
API: CT on-demand
Action: Non-profilable column view due to current restrictions.

Error: ESS810=Profiling Error: {0} “{1}”.”{2}” does not exist. Runtime object does not exist
API: CT on-demand
Action: Check the API argument, or the artifact was concurrently deleted.

Error: ESS811=Profiling Error: At least one attribute to profile must be given
API: CT on-demand
Action: Check the API argument.

Error: ESS812=Profiling Error: “{0}” is not attribute of {1} “{2}”.”{3}”
API: CT on-demand
Action: Check the API argument, or the artifact has been updated concurrently.

4.5.8.6 Troubleshooting Tips

Tips for troubleshooting and preparing diagnostic information for SAP support.

Related Information

Troubleshoot Repeated Errors [page 91]


Activate Error Trace for Enterprise Semantic Services [page 92]
Open the Error Log [page 93]
Collect Diagnostic Information [page 93]

4.5.8.6.1 Troubleshoot Repeated Errors

Procedure for error messages in the SAP HANA ESS Administration Monitor view that include the phrase If the failure
repeats.

Prerequisites

The error trace for Enterprise Semantic Services must be activated before you retry the operation.

Context

To retry the failed operation:

Procedure

1. Go to the Enterprise Semantic Services Administration Monitor by entering the following URL in a web
browser:

http://<your_HANA_instance>:<port>/sap/hana/im/ess/ui
2. If the transaction that failed was a publish request, click the Publish and Unpublish Monitor tile, find your
run-time object in the Published Artifact column, and click the Retry icon.

Next Steps

If the operation still fails after you retry it, collect diagnostic information.

4.5.8.6.2 Activate Error Trace for Enterprise Semantic


Services

The error trace obtains detailed information about actions in Enterprise Semantic Services.

Prerequisites

To configure traces, you must have the system privilege TRACE ADMIN.

Context

To activate the error trace for Enterprise Semantic Services, follow these steps:

Procedure

1. Log on to SAP HANA studio with a user that has system privilege TRACE ADMIN.
2. In the Administration editor, choose the Trace Configuration tab.

3. Choose the Edit Configuration button for the trace that you want to configure.
4. Expand the XS ENGINE node.

 Note

Ensure you select the checkbox Show All Components.

5. Locate xsa:sap.hana.im.ess and ensure that the system trace level is set to INFO, ERROR, or DEBUG; it is
usually set to DEFAULT and must be changed.
6. Click Finish.
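
The same trace level can also be set in SQL. The following is a sketch only; it assumes that the Enterprise
Semantic Services component is exposed under the trace section of xsengine.ini, so verify the exact component
key on your system before running it:

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
SET ('trace', 'xsa:sap.hana.im.ess') = 'DEBUG' WITH RECONFIGURE;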

4.5.8.6.3 Open the Error Log

After the Enterprise Semantic Services error trace is activated, find the error message.

Procedure

1. Log on to SAP HANA studio with a user name that has system privilege CATALOG READ.
2. In the Administration editor, choose the Diagnosis Files tab.
3. Look for one of the two most recent files (xsengine_alert_xxx.trc or
indexserver_alert_xxx.trc).
4. Go to the end of the file to see the error.

4.5.8.6.4 Collect Diagnostic Information

Procedure to obtain diagnostic information to send to SAP support.

Procedure

1. Log on to SAP HANA studio with a user name that has the system privilege CATALOG READ.
2. In the Administration editor, choose the Diagnosis Files tab.

3. Choose Diagnosis Information > Collect.


4. Send the following information to SAP support:

○ Error message and error code


○ Collected diagnosis files
○ If the error appeared in the Publication Schedules monitor, send all other fields of the displayed row
with the error message.

5 Maintaining Connected Systems

Avoid errors and minimize downtime by accounting for SAP HANA smart data integration components when
planning maintenance tasks for connected systems such as source databases and the SAP HANA system.

Maintaining the SAP HANA System [page 94]


Consider any effects to your data provisioning landscape before performing common SAP HANA
maintenance tasks.

Maintaining Source Databases [page 97]


Keep your databases performing at maximum capacity by learning about cleaning log files, recovery,
and preparing for restarts.

5.1 Maintaining the SAP HANA System

Consider any effects to your data provisioning landscape before performing common SAP HANA maintenance
tasks.

Most commonly, suspend and resume any data provisioning remote sources before performing SAP HANA
maintenance operations.

Update the SAP HANA System [page 95]


Suspend and resume remote sources when you must update the SAP HANA system.

Takeover/Failback with SAP HANA System Replication [page 95]


Suspend and resume remote sources when you must perform a takeover and failback operation with
SAP HANA system replication.

Failover with SAP HANA Scale-Out [page 96]


Suspend and resume remote sources when you must perform a host auto-failover with SAP HANA
scale-out.

Parent topic: Maintaining Connected Systems [page 94]

Related Information

Maintaining Source Databases [page 97]

5.1.1 Update the SAP HANA System

Suspend and resume remote sources when you must update the SAP HANA system.

Prerequisites

Be sure to back up the SAP HANA system before starting the upgrade process.

Procedure

1. Suspend all remote sources.


2. Update the SAP HANA system.
3. If needed, update each Data Provisioning Agent in your landscape.
4. Resume all remote sources.
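
For example, suspending and resuming capture can be done in SQL; the remote source name "MyRemoteSource"
below is a placeholder:

ALTER REMOTE SOURCE "MyRemoteSource" SUSPEND CAPTURE;
-- Update the SAP HANA system and, if needed, the agents, then:
ALTER REMOTE SOURCE "MyRemoteSource" RESUME CAPTURE;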

Related Information

SAP HANA Server Installation and Update Guide


Suspend and Resume Remote Sources [page 57]
Update the Data Provisioning Agent

5.1.2 Takeover/Failback with SAP HANA System Replication

Suspend and resume remote sources when you must perform a takeover and failback operation with SAP
HANA system replication.

Prerequisites

If the SAP HANA system is used as a data source, both the primary and secondary systems must be configured
with a virtual IP.

Procedure

1. Suspend all remote sources.


2. Perform a takeover on the secondary SAP HANA system.
3. Perform a failback on the former primary SAP HANA system.
4. Resume all remote sources.

Next Steps

Following an unplanned failback operation, monitor remote subscription exceptions in the Data Provisioning
Remote Subscription Monitor. To clear any exceptions, click Retry Operation.

5.1.3 Failover with SAP HANA Scale-Out

Suspend and resume remote sources when you must perform a host auto-failover with SAP HANA scale-out.

Prerequisites

If the SAP HANA system is used as a data source, both the active and standby hosts must be configured with a
virtual IP.

Procedure

1. Suspend all remote sources.


2. Stop the active SAP HANA host.
3. Wait for the standby host to take over operations from the primary host.
4. Resume all remote sources.

Next Steps

Following an unplanned failover operation, monitor remote subscription exceptions in the Data Provisioning
Remote Subscription Monitor. To clear any exceptions, click Retry Operation.

Related Information

Setting Up Host Auto-Failover (SAP HANA Administration Guide)


Suspend and Resume Remote Sources [page 57]

5.2 Maintaining Source Databases

Keep your databases performing at maximum capacity by learning about cleaning log files, recovery, and
preparing for restarts.

Restart the Source Database [page 98]


When you must restart a source database, stop any remote source capture and restart the capture
after restarting the database.

Change the Source Database User Password [page 98]


When you change the password for the source database user, you must update any remote sources
that access the database with that user.

Cleaning Up LogReader Archives [page 99]


Avoid disk space issues and ensure smooth real-time replication by regularly cleaning up the
LogReader archives.

Recover from Missing LogReader Archives [page 102]


When archive logs are missing, replication fails. There are multiple solutions for recovering and
restarting replication.

Recover from LogReader Database Upgrades [page 103]


After upgrading a source database, you may need to force a version migration to resume remote
sources on LogReader adapters.

Change the Primary Archive Log Path During Replication [page 104]
Replication isn’t impacted when the primary archive log path is changed during replication.

Maintain the Source Database Without Propagating Changes to SAP HANA [page 104]
Use the Maintenance User Filter to define a source database user that can perform maintenance tasks
in a source database, without having the changes propagated to the SAP HANA system through data
provisioning adapters.

Recover with Microsoft SQL Server Always On Failover [page 105]


Re-execute a replication task when Microsoft SQL Server fails over during the initial load.

Recover with SAP HANA System Replication Failover [page 106]


Re-execute a replication task when SAP HANA fails over during the initial load.

Parent topic: Maintaining Connected Systems [page 94]

Related Information

Maintaining the SAP HANA System [page 94]

5.2.1 Restart the Source Database

When you must restart a source database, stop any remote source capture and restart the capture after
restarting the database.

Procedure

1. Suspend capture on any remote sources that access the database.


2. Restart the source database.
3. Resume capture on any remote sources that access the database.

Related Information

Suspend and Resume Remote Sources [page 57]

5.2.2 Change the Source Database User Password

When you change the password for the source database user, you must update any remote sources that access
the database with that user.

Procedure

1. Suspend capture on any remote sources that access the database.


2. In the SAP HANA Web-based Development Workbench, locate the remote source and change the
password in the remote source credentials properties.
3. Resume capture on any remote sources that access the database.
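
The credentials can also be updated in SQL instead of the SAP HANA Web-based Development Workbench. The
following is a sketch only; the remote source name, adapter name, agent name, and configuration XML are
placeholders and must match your existing remote source definition:

ALTER REMOTE SOURCE "MyRemoteSource"
ADAPTER "OracleLogReaderAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<existing adapter configuration XML>'
WITH CREDENTIAL TYPE 'PASSWORD'
USING '<CredentialEntry name="credential"><user>dbuser</user><password>new_password</password></CredentialEntry>';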

Related Information

SAP HANA System Replication (SAP HANA Administration Guide)

Suspend and Resume Remote Sources [page 57]

5.2.3 Cleaning Up LogReader Archives

Avoid disk space issues and ensure smooth real-time replication by regularly cleaning up the LogReader
archives.

The Oracle, DB2, and MS SQL LogReader adapters rely on the log reader archive log to retrieve changed data
from the source databases. Over time, the archive log can grow to consume large amounts of disk space if it
isn’t cleaned up.

Clean the Archive Log on Oracle [page 99]


Identify the Log Sequence Number (LSN) and use the RMAN utility to clean the log.

Clean the Archive Log on Microsoft SQL Server [page 100]


Identify the Log Sequence Number (LSN) and truncate the log using a Microsoft SQL Server command.

Clean the Archive Log on DB2 [page 101]


Identify the Log Sequence Number (LSN) and use the DB2 utility to clean the log.

5.2.3.1 Clean the Archive Log on Oracle

Identify the Log Sequence Number (LSN) and use the RMAN utility to clean the log.

Procedure

1. Identify the ending truncation point of the last committed transaction in the SAP HANA server.

SELECT SRC.REMOTE_SOURCE_NAME, MIN(SUB_M.LAST_PROCESSED_BEGIN_SEQUENCE_ID)


FROM SYS.M_REMOTE_SUBSCRIPTIONS SUB_M INNER JOIN SYS.REMOTE_SUBSCRIPTIONS_ SUB
ON SUB_M.SUBSCRIPTION_NAME = SUB.SUBSCRIPTION_NAME
INNER JOIN SYS.REMOTE_SOURCES_ SRC
ON SUB.REMOTE_SOURCE_OID = SRC.REMOTE_SOURCE_OID
WHERE LENGTH(LAST_PROCESSED_BEGIN_SEQUENCE_ID) > 0 and SRC.REMOTE_SOURCE_NAME
= '<remote_source_name>' GROUP BY SRC.REMOTE_SOURCE_NAME

The ending truncation point is returned. For example:

00000000000000000000000000000000000000000000000000000000000000000000000001298ee5
00000001000003940000670f01e0000001298ee20000000000000000

2. Use the ending truncation point to identify the LSN of the log file in Oracle.

SQL>SET SERVEROUTPUT ON FORMAT WRAPPED


DECLARE LSN NUMBER(38);
SCN NUMBER(38);
SEQID VARCHAR2(200):='<truncation_point>';
BEGIN
SELECT TO_NUMBER(SUBSTR(SEQID,109,12),'XXXXXXXXXXXXXXX') INTO SCN FROM DUAL;
SELECT SEQUENCE# INTO LSN FROM (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE
ARCHIVED='YES' AND DELETED='NO' AND FIRST_CHANGE#<= SCN AND NEXT_CHANGE# >
SCN) UNION (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE ARCHIVED='YES' AND
DELETED='NO' AND FIRST_CHANGE#<= SCN AND NEXT_CHANGE# > SCN);
DBMS_OUTPUT.PUT_LINE('THE LSN IS: ' || TO_CHAR(LSN));
END;
/

The LSN is returned. For example: 916


3. Identify the number of the Oracle redo log group.

SQL> select count(GROUP#) from V$LOG;

The number of the Oracle redo log group is returned. For example: 3
4. Use the RMAN utility to clean the archive log.
The log files that can safely be cleaned have a sequence number smaller than the LSN minus the number
of the Oracle redo log group.

For example, with an LSN of 916 and an Oracle redo log group of 3, logs through sequence number 913
may be safely cleaned.

rman target sys/<sys_password>@<SID>


RMAN> delete archive log until sequence <sequence_number>;

5.2.3.2 Clean the Archive Log on Microsoft SQL Server

Identify the Log Sequence Number (LSN) and truncate the log using a Microsoft SQL Server command.

Procedure

1. Identify the end LSN of the last committed transaction in the SAP HANA server.

SELECT SRC.REMOTE_SOURCE_NAME, MIN(SUB_M.LAST_PROCESSED_BEGIN_SEQUENCE_ID)


FROM SYS.M_REMOTE_SUBSCRIPTIONS SUB_M INNER JOIN SYS.REMOTE_SUBSCRIPTIONS_ SUB
ON SUB_M.SUBSCRIPTION_NAME = SUB.SUBSCRIPTION_NAME
INNER JOIN SYS.REMOTE_SOURCES_ SRC
ON SUB.REMOTE_SOURCE_OID = SRC.REMOTE_SOURCE_OID
WHERE LENGTH(LAST_PROCESSED_BEGIN_SEQUENCE_ID) > 0 and SRC.REMOTE_SOURCE_NAME
= '<remote_source_name>' GROUP BY SRC.REMOTE_SOURCE_NAME

The ending truncation point is returned. For example:

000000000000000000000000000000000000000000000000000000000000000000000000032b0000
0ef80004000004010000032b00000ef8000100000ef0000300000000

2. Use the truncation point to clean the archive log in MS SQL.

declare @seq_id_str varchar(200) = '<truncation_point>';


declare @xact_seqno_str varchar(200) = substring(@seq_id_str,
len(@seq_id_str) - 34*2+1, 10*2)
declare @xdesid binary(10) = convert(binary(10),'00', 2)
declare @xact_seqno binary(10) = convert(binary(10),@xact_seqno_str, 2)
print 'Commit LSN: '
print @xact_seqno
exec sp_repldone @xdesid, @xact_seqno, 0,0,0
exec sp_replflush

5.2.3.3 Clean the Archive Log on DB2

Identify the Log Sequence Number (LSN) and use the DB2 utility to clean the log.

Procedure

1. Identify the current LSN in the DB2 database.

db2 => SELECT SUBSTR(RAVALUE,41,12) FROM


<pds_username>.RA_XLOG_SYSTEM_ WHERE RAKEY = 'ltm_locator'

 Note

<pds_username> is the same user specified in the remote source in SAP HANA.

The LSN is returned. For example: 0000004ff413


2. Identify the database path in DB2.

db2 list active databases

The database path is returned.


3. Navigate to the DB2 database path, and identify the archive log file containing the current LSN.

db2flsn -q <log_sequence_number>

The filename of the log containing the current LSN is returned. For example: S0000354.LOG

Archive logs older than this file can safely be cleaned.


4. Identify the first archive log method in DB2.

db2 "get db cfg for <database_name>" | grep -E "First log archive method"

a. If the first archive log method is LOGRETAIN, use a DB2 command to delete the old log files.

db2=> prune logfile prior to <log_file_containing_LSN>

b. If the first archive log method isn’t LOGRETAIN, delete log files older than the identified LSN log
manually from the archive log path on the DB2 host.

5.2.4 Recover from Missing LogReader Archives

When archive logs are missing, replication fails. There are multiple solutions for recovering and restarting
replication.

 Tip

If a secondary archive log path is set before replication, replication will automatically switch to the
secondary log path if an archive log is missing from the primary path.

Process any exceptions in the Data Provisioning Remote Subscription Monitor.

Procedure

● Restore the missing archive logs from a backup location.


An exception is raised when an archive log file can’t be accessed during replication.
a. In the Data Provisioning Remote Subscription Monitor, click the error in the Remote Source Monitor or
Remote Subscription Monitor tables.
For example:

LogReader is in ERROR state [ORA-00308: cannot open archived log '/


rqa16clnx3_work1/oraclearchive/o12lnxrdb/1_2581_896793021.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or
directory
Additional information: 3
]. Check LogReader log for details.
Context: null

If the error contains the filename of the missing archive log, the log can be restored from a backup. If
the error doesn’t name a missing archive log file, safely clean the archive log instead (see Cleaning Up
LogReader Archives [page 99]).
b. Restore the missing archive log file from the backup location on the source machine.
For example:

cp /rqa16clnx3_work1/oraclearchive/backup/1_2581_896793021.dbf /
rqa16clnx3_work1/oraclearchive/o12lnxrdb

c. Suspend and resume remote source capture to recover the replication.


d. In the Data Provisioning Remote Subscription Monitor, click the error in the CDC status column.
e. Select the error for the adapter, and click Retry Operation.
f. Select the error for the receiver, and click Retry Operation.
g. In the Data Provisioning Remote Subscription Monitor, verify that the CDC status no longer indicates an
error.
● Recover when the missing archive logs can’t be restored.
a. Drop all replication tasks with real-time enabled.
b. Re-create the replication tasks with real-time enabled.

5.2.5 Recover from LogReader Database Upgrades

After upgrading a source database, you may need to force a version migration to resume remote sources on
LogReader adapters.

Context

After a Microsoft SQL Server or Oracle source database is upgraded, the associated log reader adapter may be
unable to resume remote sources, and an error such as the following appears in the framework trace log:

2018-10-26 15:49:30,403 [ERROR] MssqlRepAgentWrapper |


MssqlRepAgentWrapper.resume - Failed to resume LogReader. It is in REPLICATE
status.
Your request cannot be carried out because the
existing XLog at the primary database <DV1> requires migration.
at
com.sybase.ra.ltm.LTM.startReplication(LTM.java:1079)

LogReader adapters store the source database version in an internal table. When a remote source is resumed,
the adapter compares the current database version with the stored version and prompts a migration if the
versions don’t match.

Add a parameter to the Data Provisioning Agent configuration file to force a version migration.

Procedure

1. In the agent configuration file, add the migration parameter for your database type.

○ For Oracle, logreader.oracle.migrate=true


○ For Microsoft SQL Server, logreader.mssql.migrate=true

By default, the agent configuration file is located at <DPAgent_root>/dpagentconfig.ini.

 Note

No migration is necessary for the DB2 LogReader adapter.

2. Restart the Data Provisioning Agent.


3. For Microsoft SQL Server, if the remote source is configured with "CDC mode"="Native", execute
mssql_server_init.sql again.
4. Resume the remote source.

5.2.6 Change the Primary Archive Log Path During Replication

Replication isn’t impacted when the primary archive log path is changed during replication.

For example, if a new primary log path is set in Oracle:

SQL> SELECT DESTINATION FROM V$ARCHIVE_DEST WHERE DEST_NAME='LOG_ARCHIVE_DEST_1';


DESTINATION
--------------------------------------------------------------------------------
D:\oracle12\archive\o12ntpdb1
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=D:\oracle12\archive\o12ntpdb1
_2';
System altered.
SQL> SELECT DESTINATION FROM V$ARCHIVE_DEST WHERE DEST_NAME='LOG_ARCHIVE_DEST_1'
;
DESTINATION
--------------------------------------------------------------------------------
D:\oracle12\archive\o12ntpdb1_2

You can use the Data Provisioning Remote Subscription Monitor to verify that the replication for all tables hasn’t
been impacted.

5.2.7 Maintain the Source Database Without Propagating Changes to SAP HANA

Use the Maintenance User Filter to define a source database user that can perform maintenance tasks in a
source database, without having the changes propagated to the SAP HANA system through data provisioning
adapters.

The actions that can be filtered depend on the source database:

● For Oracle and Microsoft SQL LogReader adapters, database transactions including INSERT, UPDATE, and
DELETE, and DDL changes such as ALTER TABLE
● For the SAP HANA adapter, database transactions such as INSERT, UPDATE, and DELETE

Prerequisites

Determine the source database user that performs the maintenance tasks.

Procedure

1. Suspend any remote sources for the associated source database.

2. In the SAP HANA Web-based Development Workbench, choose Provisioning > Remote Sources and
right-click the remote source.

3. In the Maintenance User Filter option, specify the username of the database maintenance user.
4. Re-enter the logon credentials for the source database, and save the changes.
5. Resume the remote sources for the source database.

Related Information

Suspend and Resume Remote Sources [page 57]

5.2.8 Recover with Microsoft SQL Server Always On Failover

Re-execute a replication task when Microsoft SQL Server fails over during the initial load.

Context

If a Microsoft SQL Server failover happens during the initial load of a replication task execution, the replication
task fails.

For example:

Internal error: Remote execution error occurred when getting next row from
Result Set.

 Note

If the failover occurs during real-time replication, no action is required, and the replication continues
automatically.

Procedure

1. In the Data Provisioning Design Time Object Monitor locate the replication task marked as FAILED.
2. After the Microsoft SQL Server failover completes, re-execute the replication task.

5.2.9 Recover with SAP HANA System Replication Failover

Re-execute a replication task when SAP HANA fails over during the initial load.

Context

If an SAP HANA failover happens during the initial load of a replication task execution, the replication task fails.

For example:

Internal error: sql processing error.

 Note

If the failover occurs during real-time replication, see Failover with SAP HANA Scale-Out [page 96].

Procedure

1. In the Data Provisioning Design Time Object Monitor locate the replication task marked as FAILED.
2. After the SAP HANA failover completes, re-execute the replication task.

6 Troubleshooting and Recovery Operations

This section describes common troubleshooting scenarios for your SAP HANA smart data integration
landscape, as well as recovery steps to follow when errors occur.

Troubleshooting Real-Time Replication Initial Queue Failures [page 107]


Diagnose and resolve common failure scenarios for initial queues in real-time replication tasks.

Recovering from Replication Failures [page 125]


Replication tasks may fail and generate remote source or subscription exceptions for a number of
reasons.

Recovering from Crashes and Unplanned System Outages [page 136]


Resume replication and ensure data consistency when you experience a crash or unplanned system
outage.

Troubleshooting Data Provisioning Agent Issues [page 141]


This section describes error situations related to the Data Provisioning Agent and their solutions.

Troubleshooting Other Issues [page 148]


This section describes various issues unrelated to replication failures or the Data Provisioning Agent
and their solutions.

6.1 Troubleshooting Real-Time Replication Initial Queue Failures

Diagnose and resolve common failure scenarios for initial queues in real-time replication tasks.

Resolve User Privilege Errors [page 108]


Adapters used for real-time replication require the remote source database user to be configured with
privileges specific to the source database type.

Resolve Remote Source Parameter Errors [page 109]


A replication task may fail if remote source parameters are specified with invalid or out-of-range values,
or if values for any mandatory dependent parameters aren’t specified.

Resolve Improper Source Database Configuration [page 110]


Real-time replication tasks may fail when the remote source database hasn’t been properly configured
to support real-time replication.

Resolve Improper Adapter Configurations on the Agent [page 120]

Real-time replication tasks may fail when the associated adapters haven’t been properly configured on
the Data Provisioning Agent.

Resolve Uncommitted Source Database Transactions [page 122]


Real-time replication tasks may fail when attempting to queue a remote subscription if the source
database or table has uncommitted transactions.

Resolve Log Reader Instance Port Conflicts [page 123]


Log reader adapters require an instance port that must not be used by any other applications on the
Data Provisioning Agent host.

Resolve Data Provisioning Server Timeouts [page 124]


When the default message timeout value for the Data Provisioning Server is too short, real-time
replication tasks may fail during the initial load operation.

Load Clustered and Pooled Table Metadata into SAP HANA [page 125]
Real-time replication tasks with clustered or pooled tables may fail when metadata hasn’t been
correctly loaded into the SAP HANA database.

6.1.1 Resolve User Privilege Errors

Adapters used for real-time replication require the remote source database user to be configured with
privileges specific to the source database type.

Context

Insufficient user privileges are indicated in the <DPAgent_root>/log/framework.trc trace log file.

For example:

Required privileges and/or roles not granted to the database user [ZMTEST].
Missing privileges and/or roles are [SELECT ANY TRANSACTION]

Resolve the error by granting the missing user privileges.

Procedure

1. Connect to the source database with a DBA administrator user.


2. Grant the missing privileges required for the source database to the user specified in the remote source.
3. Re-create or edit the existing replication task.

4. Execute the replication task.
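
For example, for the Oracle error shown above, where the user ZMTEST is missing the SELECT ANY TRANSACTION
privilege, the DBA user could run:

GRANT SELECT ANY TRANSACTION TO ZMTEST;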

Related Information

Create a DB2 UDB User and Grant Permissions


Create Users and Grant Privileges
SAP HANA Remote Source Configuration
Oracle Database Permissions

6.1.2 Resolve Remote Source Parameter Errors

A replication task may fail if remote source parameters are specified with invalid or out-of-range values, or if
values for any mandatory dependent parameters aren’t specified.

Context

Remote source parameter errors are indicated in the <DPAgent_root>/log/framework.trc log file, and
may vary based on the specific remote source parameter.

For example, the following scenarios may cause a remote source parameter error:

● Remote source parameter value is outside the valid range.


For example, if the Oracle maximum operation queue size is set to a value outside of the valid range, such
as 10.

[ERROR]
com.sap.hana.dp.oraclelogreaderadapter.OracleLogReaderAdapter.cdcOpen[669]
- Adapter validation failed.
com.sap.hana.dp.cdcadaptercommons.validation.ValidationException: Failed to
validate properties. Error(s):
The value [10] of property [lr_max_op_queue_size] is not in the range [25,
2147483647].

● An Oracle remote source is configured to use TNSNAMES, but the TNSNAMES file and connection
parameters aren’t specified.

[ERROR] WorkerThread.processRequest - Adapter validation failed. Failed to


validate properties. Error(s):
Property [pds_tns_connection] is mandatory.
Property [pds_tns_filename] is mandatory. Context: null
com.sap.hana.dp.adapter.sdk.AdapterException: Adapter validation failed.
Failed to validate properties. Error(s):
Property [pds_tns_connection] is mandatory.
Property [pds_tns_filename] is mandatory.

● An Oracle remote source is configured as a Multitenant Database, but the container database service
name, pluggable database service name, or Oracle multitenant credentials aren’t specified.

[ERROR] WorkerThread.processRequest - Adapter validation failed. Failed to


validate properties. Error(s):

Property [cdb_password] is mandatory.
Property [cdb_service_name] is mandatory.
Property [cdb_username] is mandatory.
Property [pds_service_name] is mandatory. Context: null
com.sap.hana.dp.adapter.sdk.AdapterException: Adapter validation failed.
Failed to validate properties. Error(s):
Property [cdb_password] is mandatory.
Property [cdb_service_name] is mandatory.
Property [cdb_username] is mandatory.
Property [pds_service_name] is mandatory.

● A user name is specified incorrectly for a case-sensitive adapter.

[ERROR] WorkerThread.processRequest - Adapter validation failed. User Name


[zmtest] does not exist in the source database. Note that 'User Name' option
is case-sensitive. Context: null
com.sap.hana.dp.adapter.sdk.AdapterException: Adapter validation failed. User
Name [zmtest] does not exist in the source database. Note that 'User Name'
option is case-sensitive.

Resolve the error by specifying complete remote source parameter values within the valid range.

Procedure

1. Alter the remote source parameter and specify a value within the valid range or any missing dependent
parameter values.
2. Re-create the replication task or edit the existing replication task.
3. Execute the replication task.

Related Information

Configure Data Provisioning Adapters


Alter Remote Source Parameters [page 59]

6.1.3 Resolve Improper Source Database Configuration

Real-time replication tasks may fail when the remote source database hasn’t been properly configured to
support real-time replication.

Enable the Oracle Archive Log [page 111]


Real-time replication tasks on Oracle remote sources may fail if the Oracle archive log hasn’t been
enabled.

Enable Supplemental Logging on Oracle [page 113]


Real-time replication tasks on Oracle remote sources may fail if supplemental logging on the Oracle
source database hasn’t been enabled or doesn’t match the supplemental logging parameter in the
remote source.

Enable the Secure File LOB Setting on Oracle [page 114]
Oracle LOB data may not replicate correctly when the DB_SECUREFILE setting is set to “ALWAYS” or
“PREFERRED”.

Specify the Oracle Service Name [page 114]


If the Oracle service name isn’t specified correctly, the remote source can be created but remote tables
can’t be browsed.

Configure the Microsoft SQL Server Transaction Log [page 115]


Replication tasks may fail if the Microsoft SQL Server transaction log can’t be read or is full.

Initialize the Microsoft SQL Server Database [page 116]


Replication tasks may fail when the data capture mode is set to “Native Mode” and the Microsoft SQL
Server hasn’t been initialized.

Enable the DB2 Archive Log [page 117]


Real-time replication tasks on DB2 remote sources may fail if the DB2 archive log hasn’t been enabled.

Create the Temporary Tablespace on DB2 [page 117]


Real-time replication tasks on DB2 remote sources may fail if the user temporary tablespace hasn’t
been created.

Specify a Valid DB2 Port Number [page 118]


Remote sources for DB2 can’t be created if the DB2 port is outside the allowed range of 1-65535.

Verify the DB2 Native Connection Settings [page 119]


The LogReader may fail to initialize properly when the DB2 native connection fails.

Define the SAP ASE Adapter Interface [page 119]


Real-time replication tasks on the SAP ASE adapter may fail if an entry for the adapter hasn’t been
added to the interface file of the data server.

6.1.3.1 Enable the Oracle Archive Log

Real-time replication tasks on Oracle remote sources may fail if the Oracle archive log hasn’t been enabled.

Context

A missing archive log may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR] com.sap.hana.dp.cdcadaptercommons.StatefulAdapterCDC
$CDCOpened.addSubscription[791] - [oracleSrc] Failed to add the first
subscription. SubscriptionSpecification [header=remoteTableId=466,
remoteTriggerId=178688, sql=SELECT "T1"."INT_C1", "T1"."VARCHAR_C2" FROM

"""LR_USER"".""TESTTB1""" "T1" , subscription=, customId=, seqID=SequenceId
[value=[0, 0, 0, 0]], isLastSubscription=true, withSchemaChanges=false,
firstSubscriptionOnTable=true, lastSubscriptionOnTable=true]
com.sap.hana.dp.adapter.sdk.AdapterException: Failed to start Log Reader because
of failure to initialize LogReader. Error:Oracle LogMiner must be installed in
order to use this command. Please verify that LogMiner is installed.

An additional error may be indicated in the <DPAgent_root>/log/repagent.log log file.

For example:

ERROR com.sybase.ds.oracle.logmnr.LogMiner Failed to start Log Miner:


STARTSCN=2814017, ENDSCN=2814017
ERROR com.sybase.ds.oracle.logmnr.LogMiner Failed to start LogMiner when
verifying LogMiner installation. ORA-01325: archive log mode must be enabled to
build into the logstream

Resolve the error by enabling the archive log on the Oracle source database.

Procedure

1. On the Oracle source database, check whether archive logging is enabled.

SQL> SELECT LOG_MODE FROM V$DATABASE;

If the result for LOG_MODE isn’t ARCHIVELOG, archive logging is disabled.


2. Enable archive logging.

SQL> SHUTDOWN IMMEDIATE;


SQL> CONNECT SYS/PASSWORD AS SYSDBA;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

3. Specify a local archive log directory.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=<local\file\path>';

4. Verify the archive destination.

SQL> SELECT DESTINATION FROM V$ARCHIVE_DEST WHERE


DEST_NAME='LOG_ARCHIVE_DEST_1';
DESTINATION

6.1.3.2 Enable Supplemental Logging on Oracle

Real-time replication tasks on Oracle remote sources may fail if supplemental logging on the Oracle source
database hasn’t been enabled or doesn’t match the supplemental logging parameter in the remote source.

Context

An error may be indicated in the <DPAgent_root>/log/framework.trc log file in the following scenarios.

● Minimum database-level supplemental logging is disabled.


For example:

[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[329] -
Adapter validation failed. Minimal supplemental logging not enabled.

● The remote source parameter is set to “database”, but PRIMARY KEY and UNIQUE KEY database-level
supplemental logging isn’t enabled.
For example:

[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[329] -
Adapter validation failed. Database PRIMARY KEY and/or UNIQUE supplemental
logging is not enabled.

● The remote source parameter is set to “table”, but table-level supplemental logging isn’t enabled.
For example:

[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[329] -
Adapter validation failed. Primary key and/or unique key supplemental logging
is not turned on for these Oracle system tables: [LOBFRAG$, TABPART$,
TABSUBPART$, COLTYPE$, INDSUBPART$, MLOG$, TYPE$, INDCOMPART$, NTAB$,
TABCOMPART$, COLLECTION$, LOB$, PROCEDUREINFO$, SNAP$, OPQTYPE$, DEFERRED_STG
$, LOBCOMPPART$, ARGUMENT$, RECYCLEBIN$, SEQ$, ATTRIBUTE$, INDPART$]

Resolve the error by enabling a supplemental logging level that matches the remote source configuration.

Procedure

● Enable minimum database-level supplemental logging.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

● Enable PRIMARY KEY and UNIQUE KEY database-level supplemental logging.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

● Enable table-level supplemental logging on each table specified in the error message.

ALTER TABLE <TABLE_NAME> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

For example:

ALTER TABLE SYS.ARGUMENT$ ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE SYS.ARGUMENT$ ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE SYS.ATTRIBUTE$ ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE SYS.ATTRIBUTE$ ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
...
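
To verify which database-level supplemental logging options are enabled before re-executing the task, you
can query V$DATABASE:

SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
FROM V$DATABASE;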

6.1.3.3 Enable the Secure File LOB Setting on Oracle

Oracle LOB data may not replicate correctly when the DB_SECUREFILE setting is set to “ALWAYS” or
“PREFERRED”.

Context

The Oracle Log Reader adapter supports Secure File LOB only when the primary database setting
DB_SECUREFILE is set to “PERMITTED”.

Procedure

1. Set the Oracle parameter to “PERMITTED”.

SQL> alter system set db_securefile = 'PERMITTED';

2. Verify the DB_SECUREFILE setting.

SQL> show parameter db_securefile

6.1.3.4 Specify the Oracle Service Name

If the Oracle service name isn’t specified correctly, the remote source can be created but remote tables can’t
be browsed.

Context

When the Oracle service name is set as a domain name, the “Database Name” remote source parameter must
be specified as the Oracle service name.

For example, when checking the Oracle service name:

SQL> show parameter service_names;


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string <qualified.domain.name>

Procedure

1. In the listener.ora file, enable the service listener.

Set the USE_SID_AS_SERVICE_LISTENER parameter to “ON”.

For example:

USE_SID_AS_SERVICE_LISTENER=ON
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = <agent_hostname>)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)

2. Restart the Oracle listener.


3. Re-create the replication task.

6.1.3.5 Configure the Microsoft SQL Server Transaction Log

Replication tasks may fail if the Microsoft SQL Server transaction log can’t be read or is full.

Context

● An unreadable transaction log error may be indicated in the <DPAgent_root>/log/framework.trc log


file.
For example:

ERROR com.sybase.ds.mssql.log.device.LogDevice The log file <C:\Program


Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\ms2014db3_log.ldf>
is being locked by SQL Server process.

Resolve the error by installing the sybfilter driver and making the log file readable.
● If the Microsoft SQL Server transaction log is full, the following error may be reported:

com.microsoft.sqlserver.jdbc.SQLServerException: The transaction log for


database 'BUDGET_LCL_DTM_HANA' is full due to 'ACTIVE_TRANSACTION'.

Verify that the auto-extend option for the Microsoft SQL Server log file is enabled, and that the disk where
the log file is located has available free space.
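
For example, to check how much of each transaction log is currently in use, you can run:

DBCC SQLPERF(LOGSPACE);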

Related Information

Make Log Files Readable

6.1.3.6 Initialize the Microsoft SQL Server Database

Replication tasks may fail when the data capture mode is set to “Native Mode” and the Microsoft SQL Server
hasn’t been initialized.

Context

The Microsoft SQL Server database must be initialized to ensure that the log reader adapter can open the
supplemental log of each table marked for replication.

 Note

Each Microsoft SQL Server needs to be initialized only one time.

When the database hasn’t been initialized, an error may be indicated.

For example:

Error: (256, 'sql processing error: QUEUE: QATUSER_REPTEST_START: Failed to add


subscription for remote subscription QATUSER_REPTEST_START.Error: exception
151050: CDC add subscription failed: Failed to add the first subscription.
Error: Failed to start Log Reader because of failure to initialize LogReader.
Error: The server is not initialized. Please run server_admin init first.\n\n:
line 1 col 1 (at pos 0)')

Resolve the error by configuring the primary data server.

Related Information

Configure the Primary Data Server for the First Time

6.1.3.7 Enable the DB2 Archive Log

Real-time replication tasks on DB2 remote sources may fail if the DB2 archive log hasn’t been enabled.

Context

A missing archive log may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[329] -
Failed to add the first subscription. Error: Failed to start Log Reader because
of failure to initialize LogReader. Error: An error occured while getting DB
parameter <LOGARCHMETH1>. Exception: DB2 SQL Error: SQLCODE=-286,
SQLSTATE=42727, SQLERRMC=8192;QARUSER, DRIVER=4.18.60

Resolve the error by setting the primary DB2 UDB database transaction logging to archive logging.
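
For example, the following commands switch the database to archive logging; the archive path is a
placeholder. Note that changing LOGARCHMETH1 away from circular logging puts the database into
backup-pending state, so an offline backup is required afterward:

db2 update db cfg for <database_name> using LOGARCHMETH1 DISK:/db2/archivelogs
db2 backup db <database_name>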

Related Information

Verify the Current Archive Setting of the Transaction Log

6.1.3.8 Create the Temporary Tablespace on DB2

Real-time replication tasks on DB2 remote sources may fail if the user temporary tablespace hasn’t been
created.

Context

A temporary tablespace error may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[329] -
Failed to add the first subscription. Error: Failed to start Log Reader because
of failure to initialize LogReader. Error: An error occured while getting DB
parameter <LOGARCHMETH1>. Exception: DB2 SQL Error: SQLCODE=-286,
SQLSTATE=42727, SQLERRMC=8192;QARUSER, DRIVER=4.18.60

Resolve the error by adding a temporary tablespace to the primary database.
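
For example, the following statements create a user temporary tablespace and grant the remote source user
access to it; the tablespace name UTMP is a placeholder:

db2 "CREATE USER TEMPORARY TABLESPACE UTMP MANAGED BY AUTOMATIC STORAGE"
db2 "GRANT USE OF TABLESPACE UTMP TO USER <pds_username>"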

Related Information

Add a Temporary Tablespace to the Primary Database

6.1.3.9 Specify a Valid DB2 Port Number

Remote sources for DB2 can’t be created if the DB2 port is outside the allowed range of 1-65535.

Context

An invalid port error may be indicated during remote source creation.

For example:

(Catalog) Error reading Remote Object: InternalError:


dberror(CallableStatement.execute): 403 - internal error.: Cannot get remote
source objects: port out of range:70000

Resolve the error by configuring DB2 with a port in the valid range (1-65535).

Procedure

1. Change the DB2 port in the services file.

○ On Windows, C:\Windows\System32\drivers\etc\services
○ On UNIX or Linux, /etc/services
2. Update the database manager configuration.

db2> update database manager configuration using svcename [<service_name> |


<port_number>]

3. Restart the DB2 server.


4. Re-create the remote source.

6.1.3.10 Verify the DB2 Native Connection Settings

The LogReader may fail to initialize properly when the DB2 native connection fails.

Context

When the native connection fails, you may see the following error in the log:

[ERROR]
com.sap.hana.dp.db2logreaderadapter.DB2RepAgentWrapper.initialize[858] -
Failed to initialize LogReader.
Could not find Resource Bundle containing index: Could not get the log end
locator because: Native database connection failed with code <-1>.

Resolve the error by verifying that the connection details for your DB2 database are configured correctly:

● Host
● Port
● Database Name
● Database Source Name

6.1.3.11 Define the SAP ASE Adapter Interface

Real-time replication tasks on the SAP ASE adapter may fail if an entry for the adapter hasn’t been added to
the interface file of the data server.

Context

An error may be indicated when creating a remote source.

For example:

Error reading Remote Object: Internal Error: dberror(CallableStatement.execute):


403 - internal error: Cannot get remote source objects: Unknown streamType:

Resolve the error by adding an entry for the SAP ASE adapter to the interface file on the SAP ASE server.

Procedure

1. Add the entry to the interface file on the SAP ASE data server.

<entry_name>
master tcp ether <agent_hostname> <port>

query tcp ether <agent_hostname> <port>

 Note

The entry name must match the adapter instance name specified in the remote source configuration.
The port number must match the SAP ASE adapter server port configured in <DPAgent_root>/
Sybase/interfaces.

2. Re-execute the replication task.

6.1.4 Resolve Improper Adapter Configurations on the Agent

Real-time replication tasks may fail when the associated adapters haven’t been properly configured on the
Data Provisioning Agent.

Provide a Compatible JDBC Driver [page 120]


Replication tasks may fail if the JDBC driver isn’t provided or doesn’t match the version of the source
database.

Specify a Compatible Java Runtime Environment [page 121]


Replication errors may occur if a noncompatible Java Runtime Environment (JRE) is provided.

Configure the DB2 Environment Variables [page 121]


Remote subscriptions may fail to queue if the DB2 runtime environment variables aren’t set on the
Data Provisioning Agent host.

6.1.4.1 Provide a Compatible JDBC Driver

Replication tasks may fail if the JDBC driver isn’t provided or doesn’t match the version of the source database.

Context

When a compatible JDBC driver isn’t provided, a replication task failure error may be indicated.

For example:

Error reading Remote Object: InternalError: dberror(CallableStatement.execute):


403 - internal error: Cannot get remote source objects: Failed to install and
start Oracle JDBC driver bundle.

Resolve the error by providing the correct JDBC driver.

● When available, use the JDBC driver distributed with the source database installation.
For example:
○ For Oracle, $ORACLE_HOME/jdbc/lib/ojdbc7.jar
○ For DB2, $INSTHOME/sqllib/java/db2jcc4.jar
● The JDBC driver version shouldn’t be lower than the source database version.
You can use the java command to verify the JDBC driver version. For example:

java -jar ojdbc7.jar


Oracle 12.1.0.2.0 JDBC 4.1 compiled with JDK7 on Mon_Jun_30_11:30:34_PDT_2014
#Default Connection Properties Resource
#Thu Sep 29 05:37:00 PDT 2016

Procedure

1. Copy the JDBC driver to the <DPAgent_root>/lib directory.


2. Reopen the replication task and add objects from the remote source.

6.1.4.2 Specify a Compatible Java Runtime Environment

Replication errors may occur if a noncompatible Java Runtime Environment (JRE) is provided.

Context

We recommend that you use the SAP JVM bundled with the Data Provisioning Agent.

For complete information about supported Java Runtime Environment versions, see the Product Availability
Matrix (PAM).

6.1.4.3 Configure the DB2 Environment Variables

Remote subscriptions may fail to queue if the DB2 runtime environment variables aren’t set on the Data
Provisioning Agent host.

Context

A replication failure may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[INFO ]
com.sap.hana.dp.db2logreaderadapter.DB2RepAgentWrapper.initialize[1186] -
Initializing LogReader ...
[ERROR]
com.sap.hana.dp.adapter.framework.core.WorkerThread.processRequest[354] - /
rqac48lnx3_work1/dpfiles/dataprovagent/LogReader/lib/linux64/libsybrauni98.so:
libdb2.so.1: cannot open shared object file: No such file or directory Context:
java.lang.UnsatisfiedLinkError: /rqac48lnx3_work1/dpfiles/dataprovagent/
LogReader/lib/linux64/libsybrauni98.so: libdb2.so.1: cannot open shared object
file: No such file or directory

Resolve the error by setting the DB2 runtime environment variables before starting the Data Provisioning
Agent.

Procedure

1. Source the DB2 profile in the <DPAgent_root>/bin/DPAgent_env.sh file.


For example, add a line such as the following:

source /home/db2inst1/sqllib/db2profile

 Tip

By sourcing the profile in the DPAgent_env.sh file, the environment variables are set each time you
use the Data Provisioning Agent Configuration tool.

2. Restart the agent with the Data Provisioning Agent Configuration tool.
3. Re-execute the replication task.

6.1.5 Resolve Uncommitted Source Database Transactions

Real-time replication tasks may fail when attempting to queue a remote subscription if the source database or
table has uncommitted transactions.

Context

A replication task error may be indicated.

For example:

SQL ERROR--
Message: ORA-00054: resource busy and acquire with NOWAIT specified or timeout
expired
SQLState: 61000
Remote Code: 54
Message: SQL execution failed

Resolve the error by committing any transactions in the source database or table.

Procedure

1. Ensure that any transactions have been committed in the source database or table.
2. Re-execute the replication task.
For trigger-based adapters such as SAP HANA or Teradata, reset the remote subscription.
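
For an Oracle source, for example, you can list sessions that still hold open transactions before
re-executing the task:

SELECT S.SID, S.SERIAL#, S.USERNAME, T.START_TIME
FROM V$TRANSACTION T JOIN V$SESSION S ON T.ADDR = S.TADDR;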

6.1.6 Resolve Log Reader Instance Port Conflicts

Log reader adapters require an instance port that must not be used by any other applications on the Data
Provisioning Agent host.

Context

Log reader adapter instances can’t be created when the specified instance port is already in use.

Procedure

1. Suspend any remote sources.


2. Edit the remote source configuration and specify an instance port that isn’t in use on the agent host.
3. Resume any remote sources.
4. In the Data Provisioning Remote Subscription Monitor, process any remote subscriptions by choosing Retry
Operations.

Related Information

Monitoring Remote Subscriptions [page 15]


Suspend and Resume Remote Sources [page 57]

6.1.7 Resolve Data Provisioning Server Timeouts

When the default message timeout value for the Data Provisioning Server is too short, real-time replication
tasks may fail during the initial load operation.

Context

The data provisioning adapter used by the replication task reports a timeout error.

For example:

Error: (256, 'sql processing error: QUEUE: SUB_T002: Failed to add subscription
for remote subscription SUB_T002[id = 165727] in remote source
MSSQLECCAdapterSrc[id = 165373]. Error: exception 151050: CDC add subscription
failed: Request timed out.\n\n: line 1 col 1 (at pos 0)')

The error can be resolved by increasing the message timeout value or cleaning up the source database archive
log.

Procedure

● Increase the timeout value by adjusting the messageTimeout value.

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini','SYSTEM') SET ('framework',


'messageTimeout')= '9999' WITH RECONFIGURE;

● Clean the source database archive log.

Related Information

Cleaning Up LogReader Archives [page 99]

6.1.8 Load Clustered and Pooled Table Metadata into SAP HANA

Real-time replication tasks with clustered or pooled tables may fail when metadata hasn’t been correctly
loaded into the SAP HANA database.

Context

Missing metadata is indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR] com.sap.hana.dp.cdcadaptercommons.StatefulAdapterCDC
$CDCOpened.addSubscription[791] - [OracleECCAdapterSrc] Failed to add the
first subscription. SubscriptionSpecification [header=remoteTableId=68,
remoteTriggerId=160778, sql=, subscription=, customId=, seqID=SequenceId
[value=[0, 0, 0, 0]], isLastSubscription=true, withSchemaChanges=false,
firstSubscriptionOnTable=true, lastSubscriptionOnTable=true]
java.lang.NullPointerException: while trying to invoke the method
com.sap.hana.dp.adapter.sdk.parser.Query.getFromClause() of a null object loaded
from local variable 'query'

Resolve the error by replicating the metadata into SAP HANA.

 Note

Beginning with the ABAP Platform 1808/1809 release, clustered and pooled tables aren’t supported in SAP
S/4HANA.

Procedure

1. Edit the dictionary replication script with the name of the remote source and the SAP HANA schema name.
By default, the dictionary replication script is located at <DPAgent_root>/LogReader/scripts/
replicate_dictionary.sql.
2. Execute the dictionary replication script in the SAP HANA database.
3. Re-execute the failed replication task.

6.2 Recovering from Replication Failures

Replication tasks may fail and generate remote source or subscription exceptions for a number of reasons.

Generally, recovering replication involves processing any exceptions in addition to other tasks.

Check for Log Reader Errors [page 126]

Replication may stop if there are log reader errors.

Recover from a Source Table DDL Schema Change [page 127]


Table replication may fail when there’s a DDL schema change in the source table and the replication
task isn’t configured to replicate with structure.

Recover from a Truncated Source Table [page 128]


A replication task may fail when a source table is truncated.

Recover from Source Table and Replication Task Recreation [page 128]
Restore replication after a failure when a source table and replication task are both re-created in a short
timeframe.

Recover from a Source and Target Data Mismatch [page 129]


Replication may fail when a new row is inserted in a source table that has a primary key and the target
table already contains the row.

Recover from Data Inconsistencies [page 130]


After a replication task has failed, you can use the lost data tracker in the Data Provisioning Agent
command-line configuration tool to identify and correct data inconsistencies that may have occurred.

Recover from an Agent Communication Issue [page 132]


Process any remote subscription exceptions and restore replication when a network interruption or
other communication issue occurs between the Data Provisioning Agent and the SAP HANA server.

Resolve Stopped or Delayed Replication on Oracle [page 133]


If replication from an Oracle source system is stopped or delayed, or if you notice poor performance,
check the instance log for reports of unsupported transactions.

Resolve Locked SAP HANA Source Tables [page 134]


A replication task or flowgraph may fail if an SAP HANA source table is locked and the remote
subscription request times out.

Reset the Remote Subscription [page 134]


When log reader-based real-time replication has stopped, you can try resetting the remote
subscription.

Clear Remote Subscription Exceptions [page 135]


Replication may be stopped due to remote subscription exceptions. For example, a remote
subscription exception may be generated when a primary key violation, constraint violation, or null
primary key occurs.

6.2.1 Check for Log Reader Errors

Replication may stop if there are log reader errors.

For more information about log reader errors, check the trace file located in the log folder of your Data
Provisioning Agent instance. You may also choose to increase the trace log levels for the Data Provisioning
Server.

Related Information

Activate Additional Trace Logging for the Data Provisioning Server [page 148]

6.2.2 Recover from a Source Table DDL Schema Change

Table replication may fail when there’s a DDL schema change in the source table and the replication task isn’t
configured to replicate with structure.

Context

When table replication fails, remote source and remote subscription exceptions may be generated.

Recover replication by processing any exceptions and recreating the replication task.

Procedure

1. In the Data Provisioning Remote Subscription Monitor, verify any reported exceptions.
Schema change exceptions on the failed replication task can’t be ignored or retried.
2. In the SAP HANA Web-based Development Workbench editor, drop the failed replication task.
3. Re-create the replication task with the “Initial + realtime with structure” replication behavior.
4. Activate and execute the new replication task.
5. In the Data Provisioning Remote Subscription Monitor, ignore the schema change exception.

Related Information

Create a Replication Task


Save and Execute a Replication Task

6.2.3 Recover from a Truncated Source Table

A replication task may fail when a source table is truncated.

Context

Restore replication by truncating the target table and processing any remote subscription exceptions.

Procedure

1. In the Data Provisioning Remote Subscription Monitor, verify any remote subscription errors.
2. Truncate the target table in the SAP HANA system to restore data consistency between the source and
target tables.
3. In the Data Provisioning Remote Subscription Monitor, ignore the TRUNCATE TABLE errors.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]

6.2.4 Recover from Source Table and Replication Task Recreation

Restore replication after a failure when a source table and replication task are both re-created in a short
timeframe.

Context

A replication failure may occur under the following conditions:

● There are multiple active remote subscriptions to the same source database.
● There’s a significant amount of change data on the source table or other subscribed tables that haven't yet
been replicated to the SAP HANA target.
● The remote subscription or replication task is dropped, and the source table and replication task are re-
created in a short timeframe.

Under these circumstances, Log Reader adapters may capture the DROP TABLE DDL and attempt to replicate
it to the SAP HANA target, generating an exception.

Restore replication by processing any remote source and subscription exceptions.

Procedure

1. In the Data Provisioning Remote Subscription Monitor, check for remote subscription errors.
2. Restore replication by selecting the exception and choosing Ignore.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]

6.2.5 Recover from a Source and Target Data Mismatch

Replication may fail when a new row is inserted in a source table that has a primary key and the target table
already contains the row.

 Note

Updating or deleting source table rows that don’t exist in the target table doesn’t cause a replication failure.

Context

Recover replication by processing any remote subscription exceptions and re-executing the replication task.

Procedure

1. In the Data Provisioning Remote Subscription Monitor, identify any errors.


2. Select the exception and choose Ignore to restore replication.
3. (Optional) Re-execute the replication task to avoid any data mismatch between the source and target
tables.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]

6.2.6 Recover from Data Inconsistencies

After a replication task has failed, you can use the lost data tracker in the Data Provisioning Agent command-
line configuration tool to identify and correct data inconsistencies that may have occurred.

Prerequisites

Before attempting to recover from data inconsistencies, ensure that you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for details.
Place the files in the <DPAgent_root>/lib folder.

Context

The tool tracks data inconsistencies by using a specified column as a partition in both the source and target
tables, and then comparing data row counts for each partition. Inconsistencies between source and target can
be printed as a SQL fix plan, or corrected automatically.

Two fix modes are available:

● row: Inconsistencies are processed row by row.


● block (Default): Inconsistencies are processed by partition block.

Procedure

1. Configure the source and target database connections.


Create an INI file with the hostname, port number, database name, username, and password for each
database in the following format:

source_db_host=<hostname>
source_db_port=<port_number>
source_db_name=<database_name>
source_db_user=<username>
source_db_pswd=<password>
target_db_host=<hostname>
target_db_port=<port_number>
target_db_name=<database_name>
target_db_user=<username>
target_db_pswd=<password>

 Tip

Save the configuration file as <DPAgent_root>\bin\DBConfig.ini to define the default configuration for the lost data tracker tool.

2. At the command line, navigate to <DPAgent_root>\bin.

3. Start the configuration tool with the --LostDataTracker parameter.

○ On Windows, agentcli.bat --LostDataTracker


○ On Linux, agentcli.sh --LostDataTracker
4. Choose Track Lost Data Configuration.
5. As prompted, specify configuration parameters for partitioning and tracking depth.
Press Enter to accept the default values or skip optional parameters.

○ Source table name


○ Column name to use for partition blocks.
For example, a date/time column.
○ Target table name
○ Column name to use as a primary key for row-mode fixes
○ Path to the database configuration file

 Tip

By default, the tool looks for the configuration file at <DPAgent_root>/bin/DBConfig.ini.

○ Difference threshold for partition blocks


If the difference between the source and target partitions is greater than the threshold, inconsistency
tracking continues into the partition.
○ Maximum row scan count
When the source and target partition row counts match, and the number of rows is less than the
maximum row scan count, the partition is skipped for further tracking.
6. As prompted, specify configuration parameters for inconsistency tracking and data fixing.

○ Tracking mode
○ row: Inconsistencies are processed row by row.
○ block: Inconsistencies are processed by partition block.
Default: block
○ Whether to generate a SQL fix plan for any inconsistencies
Default: yes
○ Whether to automatically fix any inconsistencies
Default: yes
○ Virtual table name to use when processing fixes

Next Steps

After specifying all configuration parameters, the lost data tracker automatically starts comparing the source
and target tables.

If a SQL fix plan was requested, a SQL command for each fix is displayed.

If automatic inconsistency correction was specified, the fix for each inconsistency detected is automatically
applied to the target table.
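The statements in a generated fix plan depend on the fix mode and the tables involved. As a purely hypothetical illustration of the shape of a row-mode fix (the schema, table, and key names below are invented for this example, not produced by the tool):

INSERT INTO "TGT_SCHEMA"."ORDERS"
SELECT * FROM "TGT_SCHEMA"."V_ORDERS_SRC" WHERE "ORDER_ID" = 1001;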

Related Information

SAP HANA smart data integration and all its patches Product Availability Matrix (PAM) for SAP HANA SDI 2.0

6.2.7 Recover from an Agent Communication Issue

Process any remote subscription exceptions and restore replication when a network interruption or other
communication issue occurs between the Data Provisioning Agent and the SAP HANA server.

Context

A communication issue may generate remote subscription exceptions such as the following:

● Connection to the agent has been lost.


● Connection to all the agents in the cluster has been lost.
● One or more subscriptions failed for the remote source.

In some scenarios, no exceptions are generated, but changed source data isn’t replicated to the target. In such
cases, an error message may be indicated in the Data Provisioning Server trace log.

For example:

Generic stream error: getsockopt, Event=EPOLLERR - , rc=111: Connection refused


$NetworkChannelBase$=
92 [0x00007f054cdb1798] {refCnt=4, idx=4} 10.173.160.203/0_tcp-
>10.173.160.203/8780_tcp ConnectWait,[r--c]
exception throw location:
1: 0x00007f23a75ec672 in Stream::NetworkChannelCompletionThread::run(void*&)
+0x420 at
NetworkChannelCompletion.cpp:548 (libhdbbasis.so)
2: 0x00007f23a752a047 in Execution::Thread::staticMainImp(void**)+0x743 at
Thread.cpp:463 (libhdbbasis.so)
3: 0x00007f23a752b698 in Execution::Thread::staticMain(void*)+0x34 at
ThreadMain.cpp:26 (libhdbbasis.so)
[58457]{-1}[-1/-1] 2016-11-03 00:33:53.499182 e Stream
NetworkChannelCompletion.cpp(00622) : NetworkChannelCompletionThread #5 91
[0x00007f054cdb1958] {refCnt=4, idx=5} 10.173.160.203/0_tcp-
>10.173.160.203/8780_tcp ConnectWait,[r--c]
: Error in asynchronous stream event: exception 1: no.2110001 (Basis/IO/Stream/
impl/NetworkChannelCompletion.cpp:548)

Procedure

1. Suspend all remote sources on the affected agent.


2. Restart the Data Provisioning Agent.
3. Check whether the status of the agent TCP port is “LISTEN” or “ESTABLISHED”.

○ On Windows, netstat -na | findstr "5050"
○ On Linux, netstat -na|grep 5050

Additionally, verify that no operating system firewall rules are blocking the agent TCP port.
4. Restart the Data Provisioning Server on the SAP HANA system.
5. Resume all sources on the affected agent.
6. In the Data Provisioning Remote Subscription Monitor, retry or ignore any remaining remote subscription
exceptions.
7. Verify the remote source and remote subscription statuses.

○ Remote source CDC status: OK


○ Remote subscription state: APPLY_CHANGE_DATA
8. Check for data consistency by comparing the source and target table row counts.
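Steps 1 and 5 can also be performed from a SQL console with ALTER REMOTE SOURCE, as documented in the SQL reference later in this guide:

-- Step 1: stop reading changes while the agent is restarted
ALTER REMOTE SOURCE "<remote_source_name>" SUSPEND CAPTURE;
-- Step 5: resume change capture after the agent and server are back
ALTER REMOTE SOURCE "<remote_source_name>" RESUME CAPTURE;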

Related Information

Manage the Agent Service


Operations on Services (SAP HANA Administration Guide)

6.2.8 Resolve Stopped or Delayed Replication on Oracle

If replication from an Oracle source system is stopped or delayed, or if you notice poor performance, check the
instance log for reports of unsupported transactions.

Context

When the Oracle LogMiner starts scanning from the middle of a transaction and fails to translate the raw
record, it reports an unsupported operation. This unsupported operation occurs most often on UPDATE
operations involving wide tables.

The Oracle Log Reader adapter can manage these records by using a standby scanner, but frequent
occurrence of unsupported operations can slow scan performance.

You have several options to reduce the number of unsupported operations; an illustrative configuration fragment follows this list.

● Upgrade to a newer version of the Data Provisioning Agent, if possible.


● If parallel scanning is enabled, increase the value of the Parallel scan SCN range parameter.
● Disable parallel scanning.
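These parameters are part of the remote source configuration. For orientation, they correspond to property entries in the parallelscan group of the log reader CONFIGURATION XML shown in the ALTER REMOTE SOURCE reference later in this guide; the values below are illustrative only:

<PropertyGroup name="parallelscan">
    <PropertyEntry name="lr_parallel_scan">true</PropertyEntry>
    <PropertyEntry name="lr_parallel_scanner_count">2</PropertyEntry>
    <PropertyEntry name="lr_parallel_scan_range">1024</PropertyEntry>
</PropertyGroup>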

Related Information

Data Provisioning Agent Log Files and Scripts [page 142]

6.2.9 Resolve Locked SAP HANA Source Tables

A replication task or flowgraph may fail if an SAP HANA source table is locked and the remote subscription
request times out.

Context

A replication task or flowgraph that has timed out due to a locked table may fail with the following error:

SAP DBTech JDBC: [256]: sql processing error: QUEUE: RT_SFC100: Failed to add
subscription for remote subscription RT_SFC100.Error: exception 151050: CDC add
subscription failed: Request timed out.

Additionally, you can verify a locked table by checking the blocked transactions in the SAP HANA source.
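For example, assuming you have the required monitoring privileges, you can list blocked transactions from a SQL console using the standard SAP HANA monitoring view:

SELECT * FROM "SYS"."M_BLOCKED_TRANSACTIONS";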

To resolve a locked SAP HANA source table, refer to SAP Note 0001999998.

6.2.10 Reset the Remote Subscription

When log reader-based real-time replication has stopped, you can try resetting the remote subscription.

Procedure

1. From a SQL console, reset the remote subscription.

ALTER REMOTE SUBSCRIPTION <subscription_name> RESET;

 Tip

If you have the required privileges, you can also reset remote subscriptions from the Data Provisioning
Remote Subscription Monitor. For more information, see Manage Remote Subscriptions.

2. Check the Data Provisioning Agent for the adapter instance.


In <DPAgent_root>/LogReader, look for a directory with the same name as your remote subscription
instance. Back up and delete the folder, if it exists.
3. Execute the cleanup script for your database type.
The cleanup scripts are located in <DPAgent_root>/LogReader/scripts.

 Note

Execute the script using the same user that is configured for replication.

4. Stop the SAP HANA Data Provisioning Agent service.

○ On Windows, use the Services manager in Control Panel.
○ On Linux, run ./dpagent_service.sh stop.
5. (Optional) Enable additional logging on the Data Provisioning Agent.
a. Open the Agent configuration file in a text editor.

The configuration file is located at <DPAgent_root>/dpagentconfig.ini.


b. Change the parameter framework.log.level from INFO to ALL.

Increasing the log level generates additional information useful for further debugging, if necessary. You can
safely revert the log level after resolving the issue.
6. Restart the SAP HANA Data Provisioning Agent service.

○ On Windows, use the Services manager in Control Panel.


○ On Linux, run ./dpagent_service.sh start.
7. From a SQL console, queue and distribute the remote subscription.

ALTER REMOTE SUBSCRIPTION <subscription_name> QUEUE;

ALTER REMOTE SUBSCRIPTION <subscription_name> DISTRIBUTE;

 Tip

If you have the required privileges, you can also queue and distribute remote subscriptions from the
Data Provisioning Remote Subscription Monitor. For more information, see Manage Remote
Subscriptions.

Next Steps

If you’re unable to reset the remote subscription, you may need to clear any outstanding remote subscription
exceptions first.

Related Information

Clear Remote Subscription Exceptions [page 135]


Manage Remote Subscriptions [page 59]

6.2.11 Clear Remote Subscription Exceptions

Replication may be stopped due to remote subscription exceptions. For example, a remote subscription
exception may be generated when a primary key violation, constraint violation, or null primary key occurs.

Remote subscription exceptions are reported in multiple ways.

● Remote subscription exceptions appear in the task execution log.
● E-mail alerts may be sent to the administrator.
● Exceptions are displayed in monitoring, or when you query the remote subscription exception public view.

After correcting the root cause of the remote subscription exception, you can clear the exceptions with a SQL
statement:

PROCESS REMOTE SUBSCRIPTION EXCEPTION <exception_id> IGNORE;
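To find the exception IDs to process, you can query the remote subscription exception public view mentioned above; a minimal sketch, assuming the view is exposed in the SYS schema:

SELECT * FROM "SYS"."REMOTE_SUBSCRIPTION_EXCEPTIONS";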

6.3 Recovering from Crashes and Unplanned System Outages

Resume replication and ensure data consistency when you experience a crash or unplanned system outage.

Recover from an Index Server Crash [page 137]


Verify your data consistency when you experience an SAP HANA index server crash.

Recover from a Data Provisioning Server Crash [page 137]


Restart replication when the Data Provisioning Server crashes.

Recover from a Data Provisioning Agent JVM Crash [page 138]


The Data Provisioning Agent may crash if the Java virtual machine is configured with insufficient
available memory.

Recover from an Unplanned Source Database Outage [page 139]


Replication may stop if there’s an unplanned source database outage due to a network or hardware
issue.

Recover from an SAP ASE Adapter Factory Crash [page 140]


The dpadapterfactory process used in the SAP ASE and SAP ECC ASE adapters restarts
automatically if it’s stopped or crashes.

Related Information

6.3.1 Recover from an Index Server Crash

Verify your data consistency when you experience an SAP HANA index server crash.

Procedure

1. After SAP HANA restarts, verify that all SAP HANA processes show an active GREEN status.
2. Compare source and target table row counts to verify data consistency.

6.3.2 Recover from a Data Provisioning Server Crash

Restart replication when the Data Provisioning Server crashes.

Context

When the Data Provisioning Server crashes, an error may be indicated in the /usr/sap/
<sid>/HDB<instance>/<hana_machine_name>/trace/dpserver_alert_<hana_machine_name>.trc
log file.

For example:

Instance LR1/00, OS Linux lsdub0007 3.0.101-80-default #1 SMP Fri Jul 15


14:30:41 UTC 2016 (eb2ba81) x86_64
----> Register Dump <----
rax: 0x00007f638b068a90 rbx: 0x00007f5fa74a8010
rcx: 0x0000000000000000 rdx: 0x0000000000000056
......
xmm[14]: 0x00000000.3fffffff.3fffffff.00000000
xmm[15]: 0x3fffffff.00000000.03020100.07060504
NOTE: full crash dump will be written to /usr/sap/LR1/HDB00/lsdub0007/trace/
dpserver_lsdub0007.30011.crashdump.20161102-143544.019227.trc
Call stack of crashing context:
1: 0x0000000000000000 in <no symbol>+0x0 (<unknown>)

Recover from the crash by restarting replication and processing any remote subscription exceptions.

Procedure

1. Suspend all remote sources.


2. Restart the Data Provisioning Server.
3. Restart the SAP HANA system.
4. Resume all remote sources.

5. Process any remote subscription exceptions remaining after change data capture has resumed.

Next Steps

In a multiple container SAP HANA configuration, the Data Provisioning Server is managed individually in each
tenant database. Repeat the recovery process for each tenant database.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]


Suspend and Resume Remote Sources [page 57]

6.3.3 Recover from a Data Provisioning Agent JVM Crash

The Data Provisioning Agent may crash if the Java virtual machine is configured with insufficient available
memory.

Context

An error may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR] OracleRepAgentWrapper$2.run - LogReader is not in REPLICATE state.


State: REPLICATION DOWN.
[ERROR] ReceiverImpl.sendError - LogReader is in ERROR state [GC overhead limit
exceeded]. Check LogReader log for details.
[ERROR] ReceiverImpl.sendError - Receiver is closed due to adapter error.Please
check REMOTE_SUBSCRIPTION_EXCEPTIONS table

Recover by adjusting the agent's maximum available memory and restarting the agent.

Procedure

1. Check the available memory on the Data Provisioning Agent host machine.

For example:

# free -m
total used free shared buffers cached
Mem: 258313 196343 61970 0 589 53022
-/+ buffers/cache: 142731 115582
Swap: 2047 0 2047

2. Stop the Data Provisioning Agent service.


3. Adjust the agent maximum available memory in the agent runtime options.
4. Restart the Data Provisioning Agent service.

Related Information

Manage the Agent Service


Manage the Agent Service [Command Line]
Agent Runtime Options

6.3.4 Recover from an Unplanned Source Database Outage

Replication may stop if there’s an unplanned source database outage due to a network or hardware issue.

 Note

On Oracle, the Oracle Log Reader adapter automatically reconnects to the source database when available
and resumes replication without any additional actions.

Context

A source database outage may be indicated by multiple error types in the logs.

For example:

Could not connect to <jdbc:oracle:thin:@host:1675:DDCSTD>: The Network Adapter


could not establish the connection

java.lang.Exception: Log scanner <LogMinerScanner_1> stopped because of error:


No more data to read from socket

Recover by processing remote subscription exceptions and resuming replication on the remote source.

Procedure

1. Suspend the remote source until the outage is resolved.


2. Process any remote subscription exceptions.
3. Resume replication on the remote source.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]


Suspend and Resume Remote Sources [page 57]

6.3.5 Recover from an SAP ASE Adapter Factory Crash

The dpadapterfactory process used in the SAP ASE and SAP ECC ASE adapters restarts automatically if it’s
stopped or crashes.

Context

An adapter factory crash may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[ERROR] com.sap.hana.dp.adapter.framework.cpp.CppAdapterManager.read[236] -
Socket closed
com.sap.hana.dp.adapter.sdk.AdapterException: Socket closed
at
com.sap.hana.dp.adapter.framework.socket.SocketConnector.read(SocketConnector.jav
a:103)
at
com.sap.hana.dp.adapter.framework.cpp.CppAdapterManager.read(CppAdapterManager.ja
va:232)

Additional errors may be indicated in the <DPAgent_root>/log/dpsadapterfactory_<##########>.txt log file.

For example:

e Basis FaultProtectionImpl.cpp(01610) + Instance (none)/(none), OS Linux


rqahana3 3.0.101-0.46-default #1 SMP Wed Dec 17 11:04:10 UTC 2014 (8356111)
x86_64
e Basis FaultProtectionImpl.cpp(01610) + ----> Register Dump <----
e Basis FaultProtectionImpl.cpp(01610) + rax: 0xfffffffffffffffc rbx:
0x00007f54e1adaa20
e Basis FaultProtectionImpl.cpp(01610) + rcx: 0xffffffffffffffff rdx:
0x0000000000000000
……
Helper.cpp(00514) : Using 'x64_64 ABI unwind' for stack tracing
e Basis FaultProtectionImpl.cpp(01610) + NOTE: full crash dump will be written
to /rqahana3_work2/songp/DPAgent_1.3.0_5030/log/dpsadapterfactory.crashdump.
20161020-233253.081342.trc

A full crash dump is written to <DPAgent_root>/log/dpsadapterfactory.crashdump.<timestamp>.trc.

Resolve the errors by processing any remote exceptions.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]

6.4 Troubleshooting Data Provisioning Agent Issues

This section describes error situations related to the Data Provisioning Agent and their solutions.

Data Provisioning Agent Log Files and Scripts [page 142]


Various log files are available to monitor and troubleshoot the Data Provisioning Agent.

Clean an Agent Started by the Root User [page 142]


If the Data Provisioning Agent is accidentally started by the root user, you should clean up the agent
installation.

Agent JVM Out of Memory [page 143]


If the Java Virtual Machine on the Data Provisioning Agent is running out of memory, you may need to
adjust configuration parameters on the Agent or your adapters.

Adapter Prefetch Times Out [page 144]


When task execution fails due to an adapter timeout, you may need to adjust the adapter prefetch
timeout parameter.

Agent Reports Errors When Stopping or Starting [page 145]


If the Data Provisioning Agent reports error code 5 when stopping or starting on Windows, you may
need to run the configuration tool as an administrator.

Uninstalled Agent Reports Alerts or Exceptions [page 145]


After uninstalling the Data Provisioning Agent and installing a new version, alerts and exceptions may
be reported for the previous installation.

Create an Agent System Dump [page 145]


Use the command-line agent configuration tool to generate system dumps when troubleshooting the
Data Provisioning Agent.

Resolve Agent Parameters that Exceed JVM Capabilities [page 147]


Replication tasks may fail during the initial load if the fetch size exceeds the capabilities of the Data
Provisioning Agent JVM.

Related Information

6.4.1 Data Provisioning Agent Log Files and Scripts

Various log files are available to monitor and troubleshoot the Data Provisioning Agent.

The following log files are available:

Log File Name and Location Description

<DPAgent_root>/log/ Data Provisioning Agent event log


dpagent_service_eventlog_<date>.log

<DPAgent_root>/log/framework_alert.trc Data Provisioning Agent adapter framework log. Use this file
to monitor data provisioning agent statistics.

<DPAgent_root>/log/framework.trc Data Provisioning Agent adapter framework trace log. Use


this file to trace and debug data provisioning agent issues.

<DPAgent_root>/log/configtool.log Data Provisioning Agent Configuration Tool log

<DPAgent_root>/LogReader/<instance>/log/ Instance log for log reader-based adapters


<instance>.log

<DPAgent_root>/LogReader/admin_logs/ Administrative log for log reader-based adapters


admin<timestamp>.log

Additionally, scripts for performing source database initialization and cleanup operations can be found at
<DPAgent_root>/LogReader/scripts.

6.4.2 Clean an Agent Started by the Root User

If the Data Provisioning Agent is accidentally started by the root user, you should clean up the agent
installation. In a normal configuration, the Data Provisioning Agent should be started only by a user with the
same rights and permissions as the installation owner defined during the agent installation.

Context

If the agent is started by the root user, additional files are created in the installation location. The agent can’t
access these additional files because root is their owner, and they should be removed.

 Note

This applies only if the agent was started by the user named root, and not other users that may belong to
the root group or have similar permissions.

Procedure

1. Navigate to the configuration directory in the agent installation location.

For example, /usr/sap/dataprovagent/configuration


2. Remove the following directories:

○ com.sap.hana.dp.adapterframework
○ org.eclipse.core.runtime
○ org.eclipse.osgi

 Caution

Don’t remove the config.ini file or org.eclipse.equinox.simpleconfigurator directory.

3. Remove the log directory and secure_storage file.

For example, /usr/sap/dataprovagent/log and /usr/sap/dataprovagent/secure_storage.

 Note

Depending on the permissions, you may require sudo access to remove the secure_storage file.

For example, sudo rm -rf secure_storage.

6.4.3 Agent JVM Out of Memory

If the Java Virtual Machine on the Data Provisioning Agent is running out of memory, you may need to adjust
configuration parameters on the Agent or your adapters.

Oracle Log Reader Adapters

For Oracle log reader adapters, there are multiple possible solutions.

● Check the Queue size of parallel scan tasks parameter.


The default value is 0, which allows for an unlimited queue size. To find the maximum queue size that
results in acceptable memory use, set the queue size to 10,000 and increase in increments of 50,000 or
100,000.
● Check the Parallel scan SCN range parameter.
Decrease the value as necessary.
● Check the Number of parallel scanners parameter.
Decrease the value from 4 to 2.

Setting Virtual Memory

For general memory issues, you can increase the JVM memory usage.

In dpagent.ini, change the memory allocation parameter, and restart the Agent.

For example, to increase the memory to 8 GB, enter -Xmx8192m.

Memory Dump to a File Can Cause a Disk Full Error

The dpagent.ini file contains a parameter (-XX:+HeapDumpOnOutOfMemoryError) that allows you to
perform a full memory dump, and the size of this dump is based on the -Xmx parameter, which is also in the
dpagent.ini file. For example, if you set the -Xmx parameter to 64 GB, the file created during the dump can
be 64 GB.

You can remove the -XX:+HeapDumpOnOutOfMemoryError parameter, or you can reduce the amount of
space that the dump file will take up, to help avoid any “disk full” errors.

However, you should add the line back in, if needed, when you are attempting to debug out-of-memory errors
of the Data Provisioning Agent. You would also need to ensure that you have enough disk space for the dump
file.

6.4.4 Adapter Prefetch Times Out

When task execution fails due to an adapter timeout, you may need to adjust the adapter prefetch timeout
parameter.

In the Index Server trace log, you may notice an error similar to the following:

DPAdapterAccess.cpp(01825) : DPAdapterAccess::Fetch: failed with error: Prefetch


timed out.

When this type of error appears, you can adjust the value of the adapter prefetch timeout parameter.

From a SQL console:

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini','SYSTEM')


SET ('framework','prefetchTimeout')='<seconds>' WITH RECONFIGURE;

For example, you might change the value to 600 to set the timeout to 10 minutes.

6.4.5 Agent Reports Errors When Stopping or Starting

If the Data Provisioning Agent reports error code 5 when stopping or starting on Windows, you may need to run
the configuration tool as an administrator.

To run the configuration tool once as an administrator, right-click dpagentconfigtool.exe and choose Run
as administrator.

Alternatively, you can set the configuration tool to always run as an administrator. Right-click
dpagentconfigtool.exe and choose Properties. From the Compatibility tab, choose Run this program as an
administrator and click OK.

6.4.6 Uninstalled Agent Reports Alerts or Exceptions

After uninstalling the Data Provisioning Agent and installing a new version, alerts and exceptions may be
reported for the previous installation.

If the agent is uninstalled, but associated remote sources and the agent registration aren’t dropped, any
replication tasks still using remote sources on the previous agent may report alerts and exceptions.

Solution

Drop outdated remote sources and the agent registration referring to the old installation.

To suspend and drop a remote source:

ALTER REMOTE SOURCE "<remote_source_name>" SUSPEND CAPTURE;


DROP REMOTE SOURCE "<remote_source_name>" CASCADE;

To drop an agent registration:

DROP AGENT "<agent_name>" CASCADE;

If you don’t know the old agent name, you can find it using SELECT * FROM sys.m_agents.

6.4.7 Create an Agent System Dump

Use the command-line agent configuration tool to generate system dumps when troubleshooting the Data
Provisioning Agent.

Context

By default, the configuration tool generates system dumps that include the following information:

● Log reader, framework, and OSGi logs and traces
● Information about running Java and Data Provisioning Agent processes
● Information about JVM, threads, OSGi, and adapters, if the agent connection is available

 Note

To export a system dump, the Data Provisioning Agent must be started and running.

Procedure

1. At the command line, navigate to <DPAgent_root>/bin.


2. Execute the command using the --createFullSystemDump parameter.

○ On Windows, agentcli.bat --createFullSystemDump [<additional_parameters>]


○ On Linux, ./agentcli.sh --createFullSystemDump [<additional_parameters>]

Table 30: Supported Parameters


Parameter Description

--snapshotName <name> Specifies the name for the exported dump. By default, the exported
filename is <hostname>_<time>_<name>. If no snapshot name is specified, “dpagent” is used as
the name.

--runtimeOnly Restricts the exported dump to runtime information only. The exported dump
doesn't include a copy of the logs or other folders.

Related Information

Manage the Agent Service [Command Line]

6.4.8 Resolve Agent Parameters that Exceed JVM Capabilities

Replication tasks may fail during the initial load if the fetch size exceeds the capabilities of the Data
Provisioning Agent JVM.

Context

An error may be indicated during the initial load.

For example:

Could not execute 'SELECT * FROM "SYSTEM"."V_ZMTEST_DPSIMPLE"' in 10:00.197


minutes .
SAP DBTech JDBC: [403]: internal error: Error opening the cursor for the remote
database Prefetch timed out. for query "SELECT "V_ZMTEST_DPSIMPLE"."COL1",
"V_ZMTEST_DPSIMPLE"."COL2", "V_ZMTEST_DPSIMPLE"."COL3",
"V_ZMTEST_DPSIMPLE"."COL4", "V_ZMTEST_DPSIMPLE"."COL5",
"V_ZMTEST_DPSIMPLE"."COL6", "V_ZMTEST_DPSIMPLE"."COL7" FROM "QARUSER.DPTEST01"
"V_ZMTEST_DPSIMPLE" "

In addition, an error may be indicated in the <DPAgent_root>/log/framework.trc log file.

For example:

[FATAL] DPFramework | WorkerThread.processRequest - Java heap space (failed to
allocate 40000016 bytes) (max heap: 256 MB)

Resolve the errors by reducing the row fetch size and increasing the JVM maximum size.

Procedure

1. Drop all replication tasks.


2. Increase the row fetch size in the Agent Preferences.
3. Increase the maximum virtual machine size in the Agent Runtime Options.
4. Restart the Data Provisioning Agent.
5. Process any remote subscription exceptions.

Related Information

Processing Remote Source or Remote Subscription Exceptions [page 61]


Manage the Agent Service
Agent Adapter Framework Preferences

Agent Runtime Options

6.5 Troubleshooting Other Issues

This section describes various issues unrelated to replication failures or the Data Provisioning Agent and their
solutions.

Activate Additional Trace Logging for the Data Provisioning Server [page 148]
The trace logs contain detailed information about actions in the Data Provisioning server.

Resolve a Source and Target Data Mismatch [page 150]


If data in the source and target tables of a replication task is mismatched, fix the error by re-executing
the replication task.

Configuring the Operation Cache [page 151]


You can improve performance by using an operation cache in some script servers.

Ensure Workload Management and Resource Consumption [page 152]


Ensure that in circumstances of limited memory or CPU resources that processes and system
resources remain responsive.

Related Information

6.5.1 Activate Additional Trace Logging for the Data Provisioning Server

The trace logs contain detailed information about actions in the Data Provisioning Server. When you’re
troubleshooting issues, you can gain additional insight about the root cause by increasing the level of detail in
the logs.

Prerequisites

To configure traces, you must have the system privilege TRACE ADMIN.

Procedure

1. Log on to SAP HANA studio with a user that has system privilege TRACE ADMIN.

2. In the Administration editor, choose the Trace Configuration tab.
3. Choose Edit Configuration under “Database Trace”.
4. Filter the component list for dp*.
5. Set the System Trace Level to DEBUG for any relevant components under the DPSERVER and
INDEXSERVER nodes.

Scenario Traces

Debugging Initial Load ○ DPSERVER


○ dpframework
Trace information for message handling between the index server, Data
Provisioning Server, and Data Provisioning Agent.
○ dpframeworkprefetch
Prefetch-specific trace information.
○ dpframeworkmessagebody
Trace information useful for diagnosing incorrect numbers of rows or suspected data corruption.

 Note
This option can generate a large amount of data in the trace log.

○ INDEXSERVER
○ dpadaptermanager
Trace information for initial smart data access requests.
○ XSENGINE
○ dpadaptermanager
Trace information for initial smart data access requests from the SAP
HANA Web-based Development Workbench.

Debugging Real-time Replication ○ DPSERVER
○ dpframework
Trace information for message handling between the index server, Data
Provisioning Server, and Data Provisioning Agent.
○ dpreceiver
Trace information for message receiver, including the transfer of row data
received by the framework.
○ dpdistributor
Trace information for the message distributor.
○ dpapplier
Trace information for the message applier.
○ dpserver
Trace information for communication between the Data Provisioning
Server and the index server.
○ dpremotesubscriptionmanager
Trace information for remote subscription runtime details on the Data
Provisioning Server side.
○ dpframeworkmessagebody
Trace information useful for diagnosing incorrect numbers of rows or suspected data corruption.


 Note
This option can generate a large amount of data in the trace log.

○ INDEXSERVER
○ dpdistributor
Trace information for the message distributor.
○ dpapplier
Trace information for the message applier.
○ dpremotesubscriptionmanager
Trace information for remote subscription runtime details on the index
server side.

 Tip

To ensure that applicable trace information is captured at the time of the error and not overwritten by
log wrapping, you can increase the maximum log size while capturing the traces. In the GLOBAL node,
increase the value of the maxfilesize parameter.

6. Click Finish.

Results

Additional debug information for the modified components is added to the trace log.

 Tip

You can also modify the trace log levels by issuing commands from a SQL console and manually specifying
the component to adjust.

For example, to set the Data Provisioning Server dpframework trace to DEBUG:

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini','SYSTEM') SET


('trace','dpframework') = 'DEBUG' WITH RECONFIGURE;

6.5.2 Resolve a Source and Target Data Mismatch

If data in the source and target tables of a replication task is mismatched, fix the error by re-executing the
replication task.

6.5.3 Configuring the Operation Cache

You can improve performance by using an operation cache in some script servers.

The operation cache holds operation instances for Global Address Cleanse, Universal Data Cleanse, Geocode,
and Type Identifier (TID), which are initialized and ready for use during task plan execution. This improves
performance by avoiding repeated initialization, creation, and deletion of task plan operations, and allows the
reuse of cached instances both within a single plan and across plans.

Having more instances cached improves performance, but those additional cached instances consume more
memory.

Operation cache instances are type-specific and are set in the file scriptserver.ini.

You can use the following monitoring views to verify the cached operations, usage count, and so on.

● SELECT * FROM sys.m_caches;
● SELECT * FROM sys.m_cache_entries;

The operation cache can be configured in SAP HANA studio by editing the file scriptserver.ini.

Enabling Operation Cache

To start operation cache instances, you must set the enable_adapter_operation_cache parameter to yes. The
following sample SQL statement sets the parameter to yes.

ALTER SYSTEM ALTER CONFIGURATION ('scriptserver.ini','SYSTEM') SET


('adapter_operation_cache', 'enable_adapter_operation_cache') = 'yes' WITH
RECONFIGURE;

Changing the Operation Cache Default Settings

You can turn off the cache for each node, or change the default number of instances. The following sample SQL
statement changes the number of instances.

ALTER SYSTEM ALTER CONFIGURATION ('scriptserver.ini', 'SYSTEM') SET


('adapter_operation_cache', 'gac')='30';

By default, the operation cache is enabled for all the supported types of operations (which is recommended)
and has the following number of instances.

Table 31: Operation Cache Default Settings


Option Default number of instances

Geocode 10

Global Address Cleanse (in the Cleanse node) 20

Universal Data Cleanse (in the Cleanse node) 60


Type Identification 20

Ensuring That Real-Time Jobs Have Priority

The operation cache is used for both batch jobs and real-time jobs. Batch jobs can exhaust the operation
cache, leaving insufficient resources to optimize the running of real-time jobs. If you run real-time jobs, use
these settings to ensure that a dedicated number of operation cache instances are available for real-time tasks.
By default, the number of instances made available to real-time tasks is half the total number of instances for
each option.

Table 32: Real-Time Default Settings


Option Default number of instances

Geocode 5

Global Address Cleanse (in the Cleanse node) 10

Universal Data Cleanse (in the Cleanse node) 30

Type Identification 0

Caching Performance with Global Address Cleanse

Within the Cleanse node, you can configure two options that are relevant to Global Address Cleanse. When
caching is enabled, for better performance set these options as follows:

Table 33: Best Performance Global Address Cleanse Options


Option Recommended setting

Country Identification Mode assign

Default Country NONE

6.5.4 Ensure Workload Management and Resource Consumption

Ensure that processes and system resources remain responsive in circumstances of limited memory or CPU
resources.

Information on the specifics and procedures of workload management is found in the SAP HANA
Administration Guide.

SAP HANA smart data integration takes advantage of this framework and allows you to better handle
circumstances of limited resources. Workload management in SAP HANA allows you to optimize for your

system. This framework also works within the limited memory or CPU resources, as you define them in the
workload class and mapping.

For example, if the workload class sets “STATEMENT THREAD LIMIT = 5”, then SAP HANA creates up to five
instances per node or operation in parallel during the task plan execution.

If the workload class sets “STATEMENT MEMORY LIMIT = 2GB” but any of the nodes or operations in the task
plan requires more than 2 GB of memory, then the task fails with the error “[MEMORY_LIMIT_VIOLATION]
Information about current memory composite-limit violation”.
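For reference, a workload class with the limits mentioned above could be created roughly as follows. This is a sketch based on standard SAP HANA workload management syntax (see the SAP HANA Administration Guide for the authoritative statements); the class name, mapping name, and mapping property are examples:

CREATE WORKLOAD CLASS "SDI_TASKS"
SET 'STATEMENT THREAD LIMIT' = '5', 'STATEMENT MEMORY LIMIT' = '2';
CREATE WORKLOAD MAPPING "SDI_TASKS_MAPPING" WORKLOAD CLASS "SDI_TASKS"
SET 'APPLICATION USER NAME' = '<user_name>';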

Consider these options and constraints to achieve the best possible performance.

Related Information

Administration Information Map (SAP HANA Administration Guide)

7 SQL and System Views Reference for Smart Data Integration and Smart Data Quality

This section contains information about SQL syntax and system views that can be used in SAP HANA smart
data integration and SAP HANA smart data quality.

For complete information about all SQL statements and system views for SAP HANA and other SAP HANA
contexts, see the SAP HANA SQL and System Views Reference.

For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description (FSD) for your specific SAP HANA version on the SAP HANA Platform page.

SQL Statements [page 154]


SAP HANA smart data integration and SAP HANA smart data quality support many SQL statements to
allow you to do such tasks as create agents and adapters, administer your system, and so on.

System Views [page 192]


System views allow you to query for various information about the system state using SQL commands.
The results appear as tables.

Related Information

SAP HANA SQL and System Views Reference

7.1 SQL Statements

SAP HANA smart data integration and SAP HANA smart data quality support many SQL statements to allow
you to do such tasks as create agents and adapters, administer your system, and so on.

ALTER ADAPTER Statement [Smart Data Integration] [page 156]


The ALTER ADAPTER statement alters an adapter. Refer to CREATE ADAPTER for a description of the
AT LOCATION clause.

ALTER AGENT Statement [Smart Data Integration] [page 158]


The ALTER AGENT statement changes an agent's host name and/or port and SSL property if it uses
the TCP protocol. It can also assign an agent to an agent group.

ALTER REMOTE SOURCE Statement [Smart Data Integration] [page 159]


The ALTER REMOTE SOURCE statement modifies the configuration of an external data source
connected to the SAP HANA database.

ALTER REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 163]

The ALTER REMOTE SUBSCRIPTION statement allows the QUEUE command to initiate real-time data
processing, and the DISTRIBUTE command applies the changes.

CANCEL TASK Statement [Smart Data Integration] [page 164]


Cancels a task that was started with START TASK.

CREATE ADAPTER Statement [Smart Data Integration] [page 166]


The CREATE ADAPTER statement creates an adapter that is deployed at the specified location.

CREATE AGENT Statement [Smart Data Integration] [page 168]


The CREATE AGENT statement registers connection properties of an agent that is installed on another
host.

CREATE AGENT GROUP Statement [Smart Data Integration] [page 170]


The CREATE AGENT GROUP statement creates an agent clustering group to which individual agents
can be assigned.

CREATE AUDIT POLICY Statement [Smart Data Integration] [page 171]


The CREATE AUDIT POLICY statement creates a new audit policy, which can then be enabled and
cause the specified audit actions to occur.

CREATE REMOTE SOURCE Statement [Smart Data Integration] [page 173]


The CREATE REMOTE SOURCE statement defines an external data source connected to SAP HANA
database.

CREATE REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 174]


The CREATE REMOTE SUBSCRIPTION statement creates a remote subscription in SAP HANA to
capture changes specified on the entire virtual table or part of a virtual table using a subquery.

CREATE VIRTUAL PROCEDURE Statement [Smart Data Integration] [page 179]


Creates a virtual procedure using the specified programming language that allows execution of the
procedure body at the specified remote source.

DROP ADAPTER Statement [Smart Data Integration] [page 181]


The DROP ADAPTER statement removes an adapter from all locations.

DROP AGENT Statement [Smart Data Integration] [page 182]


The DROP AGENT statement removes an agent.

DROP AGENT GROUP Statement [Smart Data Integration] [page 183]


The DROP AGENT GROUP statement removes an agent clustering group.

DROP REMOTE SUBSCRIPTION Statement [Smart Data Integration] [page 184]


The DROP REMOTE SUBSCRIPTION statement drops an existing remote subscription.

GRANT Statement [Smart Data Integration] [page 185]


GRANT is used to grant privileges and structured privileges to users and roles. GRANT is also used to
grant roles to users and other roles.

PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration] [page 187]
The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an
exception should be processed.

SESSION_CONTEXT Function [Smart Data Integration] [page 188]


Returns the value of session_variable assigned to the current user.

START TASK Statement [Smart Data Integration] [page 189]


Starts a task.

Related Information

SQL Reference for Additional SAP HANA Contexts (SAP HANA SQL and System Views Reference)

7.1.1 ALTER ADAPTER Statement [Smart Data Integration]

The ALTER ADAPTER statement alters an adapter. Refer to CREATE ADAPTER for a description of the AT
LOCATION clause.

Syntax

ALTER ADAPTER <adapter_name> [PROPERTIES <properties>]


| {ADD | REMOVE} LOCATION {DPSERVER | AGENT <agent_name>}
| REFRESH AT LOCATION {DPSERVER | AGENT <agent_name>}

Syntax Elements

<adapter_name>

The name of the adapter to be altered.

<adapter_name> ::= <identifier>

<agent_name>

The agent name if the adapter is set up on the agent.

<agent_name> ::= <identifier>

<properties>

The optional properties of the adapter, such as display_name. If display_name is not


specified, then adapter_name appears in the user interface.

<properties> ::= <string_literal>

Description

The ALTER ADAPTER statement alters an adapter. Refer to CREATE ADAPTER for a description of the AT
LOCATION clause.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

System privilege: ADAPTER ADMIN

Examples

Add or remove an existing adapter at agent or Data Provisioning Server

Create two agents and an adapter at the first agent:

CREATE AGENT TEST_AGENT_1 PROTOCOL 'TCP' HOST 'test_host1' PORT 5050;
CREATE AGENT TEST_AGENT_2 PROTOCOL 'HTTP';
CREATE ADAPTER TEST_ADAPTER AT LOCATION AGENT TEST_AGENT_1;

Add an existing adapter TEST_ADAPTER to agent TEST_AGENT_2:

ALTER ADAPTER TEST_ADAPTER ADD LOCATION AGENT TEST_AGENT_2;

Remove an existing adapter TEST_ADAPTER from agent TEST_AGENT_2:

ALTER ADAPTER TEST_ADAPTER REMOVE LOCATION AGENT


TEST_AGENT_2;

Add an existing adapter TEST_ADAPTER at the Data Provisioning Server:

ALTER ADAPTER TEST_ADAPTER ADD LOCATION DPSERVER;

Remove an existing adapter TEST_ADAPTER at Data Provisioning Server:

ALTER ADAPTER TEST_ADAPTER REMOVE LOCATION DPSERVER;

Refresh configuration and query optimization capabilities of an adapter

Read configuration and query optimization capabilities of an adapter from the adapter setup at the agent or Data Provisioning Server:

ALTER ADAPTER TEST_ADAPTER REFRESH AT LOCATION DPSERVER;
ALTER ADAPTER TEST_ADAPTER REFRESH AT LOCATION AGENT TEST_AGENT_2;

Update display name property of an adapter

Change display name for an adapter to 'My Custom Adapter':

ALTER ADAPTER TEST_ADAPTER PROPERTIES 'display_name=My Custom Adapter';

7.1.2 ALTER AGENT Statement [Smart Data Integration]

The ALTER AGENT statement changes an agent's host name and/or port and SSL property if it uses the TCP
protocol. It can also assign an agent to an agent group.

Syntax

ALTER AGENT <agent_name>


HOST <agent_hostname> [ PORT <agent_port_number> ] [ { ENABLE | DISABLE } SSL ]
| PORT <agent_port_number> [ {ENABLE | DISABLE} SSL ]
| [ {ENABLE | DISABLE} SSL ]
| { SET | UNSET } AGENT GROUP <agent_group_name>

Syntax Elements

<agent_name>

The name of the agent to be altered.

<agent_name> ::= <identifier>

<agent_hostname>

The name of the agent host.

<agent_hostname> ::= <string_literal>

<agent_port_number>

The port number of the agent.

<agent_port_number> ::= <integer_literal>

{ENABLE | DISABLE} SSL

Specifies whether the agent's TCP listener on the specified port uses SSL.

<agent_group_name>

The name of the agent clustering group to which the agent should be attached.

<agent_group_name> ::= <identifier>

Description

The ALTER AGENT statement changes an agent's host name and/or port if it uses the TCP protocol. It can also
assign an agent to an agent group.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::AlterAgent

System privilege: AGENT ADMIN

Examples

● Alter TEST_AGENT's hostname test_host and port to 5051, if it uses 'TCP' protocol

ALTER AGENT TEST_AGENT HOST 'test_host' PORT 5051;

● Alter TEST_AGENT's hostname test_host, if it uses 'TCP' protocol

ALTER AGENT TEST_AGENT HOST 'test_host';

● Alter TEST_AGENT's port to 5051, if it uses 'TCP' protocol

ALTER AGENT TEST_AGENT PORT 5051;

● Assign TEST_AGENT to agent group TEST_GROUP

ALTER AGENT TEST_AGENT SET AGENT GROUP TEST_GROUP;

● Remove TEST_AGENT from agent group TEST_GROUP

ALTER AGENT TEST_AGENT UNSET AGENT GROUP TEST_GROUP;

7.1.3 ALTER REMOTE SOURCE Statement [Smart Data Integration]

The ALTER REMOTE SOURCE statement modifies the configuration of an external data source connected to
the SAP HANA database.

The ALTER REMOTE SOURCE SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the ALTER REMOTE SOURCE topic for complete information. This
information is specific to smart data integration functionality.

Syntax

ALTER REMOTE SOURCE <remote_source_name> <adapter_clause> [<credential_clause>]


| { SUSPEND | RESUME } { CAPTURE | DISTRIBUTION }
| { CLEAR OBJECTS | REFRESH OBJECTS | CANCEL REFRESH OBJECTS }
| START LATENCY MONITORING <latency_ticket_name> [ INTERVAL
<interval_in_seconds> ]

| STOP LATENCY MONITORING <latency_ticket_name>
| CLEAR LATENCY HISTORY [ <latency_ticket_name> ]

Syntax Elements

Syntax elements specific to smart data integration are described as follows. For information about syntax
elements that aren’t specific to smart data integration, refer to the ALTER REMOTE SOURCE topic.

<adapter_clause>

Adapter configuration.

<adapter_clause> ::= [ADAPTER <adapter_name>


[AT LOCATION { DPSERVER | AGENT <agent_name> | AGENT GROUP
<agent_group_name>}] <configuration_clause>]

<agent_name> ::= <identifier>


<agent_group_name> ::= <identifier>

<configuration_clause> ::= CONFIGURATION


'<configuration_xml_string>'

The <configuration_xml_string> is the XML-formatted configuration string for


the remote source.

Refer to CREATE ADAPTER for a description of the AT LOCATION clause.


{ SUSPEND | RESUME } { CAPTURE | DISTRIBUTION }

ALTER REMOTE SOURCE <remote_source_name> SUSPEND CAPTURE
Suspends the adapter and agent from reading any more changes from the source system. This is
helpful when the source system or SAP HANA is preparing for planned maintenance or an upgrade.

ALTER REMOTE SOURCE <remote_source_name> RESUME CAPTURE
Resumes the suspended adapter to read changed data from the source system.

ALTER REMOTE SOURCE <remote_source_name> SUSPEND DISTRIBUTION
Suspends the application of real-time changes in SAP HANA tables but collects changed data from
the source system.

ALTER REMOTE SOURCE <remote_source_name> RESUME DISTRIBUTION
Resumes applying real-time changes in SAP HANA tables.

{ CLEAR OBJECTS | REFRESH OBJECTS | CANCEL REFRESH OBJECTS }

ALTER REMOTE SOURCE <remote_source_name> CLEAR OBJECTS
Clears all the data received from the adapter for this remote source from HANA tables.

ALTER REMOTE SOURCE <remote_source_name> REFRESH OBJECTS
Starts building HANA dictionary tables that contain remote source objects.

ALTER REMOTE SOURCE <remote_source_name> CANCEL REFRESH OBJECTS
Cancels the long-running REFRESH background operation. This stops fetching records from the
adapter but keeps the data received so far from the remote source in HANA tables.

ALTER REMOTE SOURCE <remote_source_name> START LATENCY MONITORING <ticket_name>

Starts the collection of latency statistics one time or at regular intervals. The user
specifies a target latency ticket in the monitoring view.
ALTER REMOTE SOURCE <remote_source_name> STOP LATENCY MONITORING <ticket_name>

Stops the collection of latency statistics into the given latency ticket.
ALTER REMOTE SOURCE <remote_source_name> CLEAR LATENCY HISTORY

Clears the latency statistics, for either one latency ticket or the whole remote
source, from the monitoring view.
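For example, based on the syntax above (the remote source and ticket names are examples, and the interval is in seconds):

ALTER REMOTE SOURCE MYECC START LATENCY MONITORING TEST_TICKET INTERVAL 60;
ALTER REMOTE SOURCE MYECC STOP LATENCY MONITORING TEST_TICKET;
ALTER REMOTE SOURCE MYECC CLEAR LATENCY HISTORY TEST_TICKET;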

Description

The ALTER REMOTE SOURCE statement modifies the configuration of an external data source connected to
the SAP HANA database. Only database users with the object privilege ALTER for remote sources may alter
remote sources.

 Note

You may not change a user name while a remote source is suspended.

Permissions

This statement requires the ALTER object privilege on the remote source.

Examples

ALTER REMOTE SOURCE "odata_nw" ADAPTER "ODataAdapter"


AT LOCATION DPSERVER
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="connection_properties">
<PropertyEntry name="URL">http://services.odata.org/Northwind/Northwind.svc/</
PropertyEntry>
<PropertyEntry name="proxyserver">proxy</PropertyEntry>

<PropertyEntry name="proxyport">8080</PropertyEntry> <PropertyEntry
name="truststore"></PropertyEntry>
<PropertyEntry name="supportformatquery"></PropertyEntry>
</ConnectionProperties>' WITH CREDENTIAL TYPE 'PASSWORD'
USING '<CredentialEntry name="password"><user></user><password></password></
CredentialEntry>';

The configuration clause must be a structured XML string that defines the settings for the remote source. For
example, the CONFIGURATION string in the following example configures a remote source for an Oracle
database.

CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>


<ConnectionProperties name="configurations">
<PropertyGroup name="generic">
<PropertyEntry name="instance_name">ora_inst</PropertyEntry>
<PropertyEntry name="admin_port">12345</PropertyEntry>
<PropertyEntry name="map_char_types_to_unicode">false</PropertyEntry>
</PropertyGroup>
<PropertyGroup name="database">
<PropertyEntry name="cdb_enabled">false</PropertyEntry>
<PropertyEntry name="pds_use_tnsnames">false</PropertyEntry>
<PropertyEntry name="pds_host_name"><db_hostname></PropertyEntry>
<PropertyEntry name="pds_port_number">1521</PropertyEntry>
<PropertyEntry name="pds_database_name">ORCL</PropertyEntry>
<PropertyEntry name="cdb_service_name"></PropertyEntry>
<PropertyEntry name="pds_service_name"></PropertyEntry>
<PropertyEntry name="pds_tns_filename"></PropertyEntry>
<PropertyEntry name="pds_tns_connection"></PropertyEntry>
<PropertyEntry name="cdb_tns_connection"></PropertyEntry>
<PropertyEntry name="_pds_tns_connection_with_cdb_enabled"></
PropertyEntry>
<PropertyEntry name="pds_byte_order"></PropertyEntry>
</PropertyGroup>
<PropertyGroup name="schema_alias_replacements">
<PropertyEntry name="schema_alias"></PropertyEntry>
<PropertyEntry name="schema_alias_replacement"></PropertyEntry>
</PropertyGroup>
<PropertyGroup name="security">
<PropertyEntry name="pds_use_ssl">false</PropertyEntry>
<PropertyEntry name="pds_ssl_sc_dn"></PropertyEntry>
<PropertyEntry name="_enable_ssl_client_auth">false</PropertyEntry>
</PropertyGroup>
<PropertyGroup name="jdbc_flags">
<PropertyEntry name="remarksReporting">false</PropertyEntry>
</PropertyGroup>
<PropertyGroup name="cdc">
<PropertyGroup name="databaseconf">
<PropertyEntry name="pdb_timezone_file"><timezone_file></
PropertyEntry>
<PropertyEntry name="pdb_archive_path"></PropertyEntry>
<PropertyEntry name="pdb_supplemental_logging_level">table</
PropertyEntry>
</PropertyGroup>
<PropertyGroup name="parallelscan">
<PropertyEntry name="lr_parallel_scan">false</PropertyEntry>
<PropertyEntry name="lr_parallel_scanner_count"></PropertyEntry>
<PropertyEntry name="lr_parallel_scan_queue_size"></
PropertyEntry>
<PropertyEntry name="lr_parallel_scan_range"></PropertyEntry>
</PropertyGroup>
<PropertyGroup name="logreader">
<PropertyEntry name="skip_lr_errors">false</PropertyEntry>
<PropertyEntry name="lr_max_op_queue_size">1000</PropertyEntry>
<PropertyEntry name="lr_max_scan_queue_size">1000</PropertyEntry>
<PropertyEntry name="lr_max_session_cache_size">1000</
PropertyEntry>
<PropertyEntry name="scan_fetch_size">10</PropertyEntry>

Administration Guide
162 PUBLIC SQL and System Views Reference for Smart Data Integration and Smart Data Quality
<PropertyEntry name="pdb_dflt_column_repl">true</PropertyEntry>
<PropertyEntry name="pdb_ignore_unsupported_anydata">false</
PropertyEntry>
<PropertyEntry name="pds_sql_connection_pool_size">15</
PropertyEntry>
<PropertyEntry name="pds_retry_count">5</PropertyEntry>
<PropertyEntry name="pds_retry_timeout">10</PropertyEntry>
</PropertyGroup>
</PropertyGroup>
</ConnectionProperties>'

Related Information

ALTER REMOTE SOURCE Statement (Access Control) (SAP HANA SQL and System Views Reference)

7.1.4 ALTER REMOTE SUBSCRIPTION Statement [Smart Data Integration]

The ALTER REMOTE SUBSCRIPTION statement allows the QUEUE command to initiate real-time data processing and the DISTRIBUTE command to apply the changes.

Syntax

ALTER REMOTE SUBSCRIPTION [<schema_name>.]<subscription_name>
{ QUEUE | DISTRIBUTE | RESET }

Syntax Elements

<subscription_name>

The name of the remote subscription.

<subscription_name> ::= <identifier>

Description

The ALTER REMOTE SUBSCRIPTION statement allows the QUEUE command to initiate real-time data processing and the DISTRIBUTE command to apply the changes. Typically, the QUEUE command is issued before the initial load of data, and the DISTRIBUTE command is issued after the initial load completes. The RESET command can be used to reset the real-time process so that it starts from the initial load again.

Permissions

This statement requires the ALTER object privilege on the remote source.

Example

Capture changes from a virtual table to an SAP HANA table.

CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host1' PORT 5050;
CREATE ADAPTER 'DB2ECCAdapter' AT LOCATION AGENT TEST_AGENT;
CREATE REMOTE SOURCE MYECC ADAPTER 'DB2ECCAdapter' CONFIGURATION '<configuration_xml>' AT LOCATION AGENT TEST_AGENT;
CREATE VIRTUAL TABLE MARA_VT AT MYECC."<NULL>"."<NULL>".MARA;
CREATE COLUMN TABLE TGT_MARA LIKE MARA_VT;
CREATE REMOTE SUBSCRIPTION TEST_SUB ON MARA_VT TARGET TABLE TGT_MARA;
ALTER REMOTE SUBSCRIPTION TEST_SUB QUEUE;

Perform initial load of data using INSERT-SELECT or a TASK.

INSERT INTO TGT_MARA SELECT * FROM MARA_VT;
ALTER REMOTE SUBSCRIPTION TEST_SUB DISTRIBUTE;

Now insert or update a material record in the ECC system and observe the change applied to the TGT_MARA table in SAP HANA. To reset the real-time process and restart the load:

ALTER REMOTE SUBSCRIPTION TEST_SUB RESET;
ALTER REMOTE SUBSCRIPTION TEST_SUB QUEUE;

Perform initial load of data using INSERT-SELECT or a TASK.

INSERT INTO TGT_MARA SELECT * FROM MARA_VT;
ALTER REMOTE SUBSCRIPTION TEST_SUB DISTRIBUTE;

7.1.5 CANCEL TASK Statement [Smart Data Integration]

Cancels a task that was started with START TASK.

Syntax

CANCEL TASK <task_execution_id> [WAIT <wait_time_in_seconds>]

Syntax Elements

<task_execution_id> ::= <unsigned_integer>

Specifies the task execution ID to cancel. See the START TASK topic for more information about
TASK_EXECUTION_ID.

<wait_time_in_seconds> ::= <unsigned_integer>

Number of seconds to wait for the task to cancel before returning from the command.

Description

Cancels a task that was started with START TASK.

By default, the CANCEL TASK command returns after sending the cancel request. Optionally, a WAIT value can be specified so that the command waits for the task to actually cancel before returning. If the task has not canceled within the specified amount of time, CANCEL TASK returns error code 526 (request to cancel task was sent but task did not cancel before the timeout was reached).

 Note

If the WAIT value is 0, the command returns immediately after sending the cancel request, as it would if no
WAIT value were entered.

Permissions

The user that called START TASK can implicitly cancel it; otherwise, the CATALOG READ and SESSION ADMIN system privileges are required.

Examples

Assuming that a TASK performTranslation was already started using START TASK and has a task execution ID of 255, it can be canceled using either of the following commands. The behavior is the same in both cases:

CANCEL TASK 255;

CANCEL TASK 255 WAIT 0;

Assuming that a TASK performTranslation was already started using START TASK, has a task execution ID of 256, and the user wants to wait up to 5 seconds for the command to cancel, it can be canceled using the following command:

CANCEL TASK 256 WAIT 5;

If the task cancels within 5 seconds, CANCEL TASK returns success. If it does not cancel within 5 seconds, the statement returns error code 526.

SQL Script

You can call CANCEL TASK within the SQL Script CREATE PROCEDURE. Refer to the SAP HANA SQL Script
Reference for complete details about CREATE PROCEDURE.

CREATE PROCEDURE "CANCEL_TASK"."CANCEL_MY_TASK"(in exec_id INT)


LANGUAGE SQLSCRIPT AS
BEGIN
CANCEL TASK :exec_id;
END;

CANCEL TASK is not supported in:

● Table UDF
● Scalar UDF
● Trigger
● Read-only procedures

Related Information

START TASK Statement [Smart Data Integration] [page 189]

7.1.6 CREATE ADAPTER Statement [Smart Data Integration]

The CREATE ADAPTER statement creates an adapter that is deployed at the specified location.

Syntax

CREATE ADAPTER <adapter_name> [PROPERTIES <properties>] AT LOCATION
{DPSERVER | AGENT <agent_name>}

Syntax Elements

<adapter_name>

The name of the adapter to be created.

<adapter_name> ::= <identifier>

<agent_name>
The agent name if the adapter is set up on the agent.

<agent_name> ::= <identifier>

<properties>

The optional properties of the adapter, such as display_name. When display_name is not specified, then adapter_name displays in the user interface.

<properties> ::= <string_literal>

AT LOCATION DPSERVER

The adapter runs inside the Data Provisioning Server process in SAP HANA.

AT LOCATION AGENT <agent_name>

Specify an agent that is set up outside of SAP HANA for the adapter to run inside.

Description

The CREATE ADAPTER statement creates an adapter that is deployed at the specified location. The adapter
must be set up on the location prior to running this statement. When the statement is executed, the Data
Provisioning Server contacts the adapter to retrieve its configuration details such as connection properties and
query optimization capabilities.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::AddLocationToAdapter

System privilege: ADAPTER ADMIN

Examples

Create an adapter at the Data Provisioning Server
Create an adapter TEST_ADAPTER running in the Data Provisioning Server.

CREATE ADAPTER TEST_ADAPTER AT LOCATION DPSERVER;

Create an adapter at a specified agent
Create an agent with name TEST_AGENT.

CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host' PORT 5050;

Create an adapter TEST_ADAPTER on agent TEST_AGENT.

CREATE ADAPTER TEST_ADAPTER AT LOCATION AGENT TEST_AGENT;

7.1.7 CREATE AGENT Statement [Smart Data Integration]

The CREATE AGENT statement registers connection properties of an agent that is installed on another host.

Syntax

CREATE AGENT <agent_name> PROTOCOL { 'HTTP' | 'TCP' HOST <agent_hostname> PORT <agent_port_number> [{ENABLE | DISABLE} SSL] } [AGENT GROUP <agent_group_name>]

Syntax Elements

<agent_name>

The name of the agent to be created and its protocol.

<agent_name> ::= <identifier>

PROTOCOL
The protocol for the agent.

HTTP: Agent uses the HTTP protocol for communication with the DP server. Use this protocol when the SAP HANA database is in the cloud.

    PROTOCOL 'HTTP'

TCP: Agent uses the TCP protocol and listens on the specified port to receive requests from the DP server. Use this protocol when the SAP HANA database can connect to the agent's TCP port.

    PROTOCOL 'TCP' HOST <agent_hostname> PORT <agent_port_number>

    <agent_hostname> ::= <string_literal>
    <agent_port_number> ::= <integer_literal>

    The DP server connects to the agent listening on the specified hostname and port. Use this protocol when the SAP HANA database is on-premise.

{ENABLE | DISABLE} SSL: Specifies whether the agent's TCP listener on the specified port uses SSL.

<agent_group_name>

The name of the agent clustering group to which the agent should belong.

<agent_group_name> ::= <identifier>

Description

The CREATE AGENT statement registers the connection properties of an agent that is installed on another host. The DP server and agent use these connection properties when establishing the communication channel.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::CreateAgent

System privilege: AGENT ADMIN

Examples

Create an agent with TCP protocol
Create an agent TEST_AGENT running on test_host and port 5050.

CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host' PORT 5050;

Create an agent with HTTP protocol
Create an agent TEST_AGENT that uses HTTP.

CREATE AGENT TEST_AGENT PROTOCOL 'HTTP';

Create an agent with HTTP protocol in an agent group
Create an agent TEST_AGENT that uses HTTP and belongs to agent clustering group TEST_GROUP.

CREATE AGENT TEST_AGENT PROTOCOL 'HTTP' AGENT GROUP TEST_GROUP;

7.1.8 CREATE AGENT GROUP Statement [Smart Data Integration]

The CREATE AGENT GROUP statement creates an agent clustering group to which individual agents can be
assigned.

Syntax

CREATE AGENT GROUP <agent_group_name>

Syntax Elements

<agent_group_name>

The name of the agent group to create.

<agent_group_name> ::= <identifier>

Description

The CREATE AGENT GROUP statement creates an agent clustering group to which individual agents can be
assigned. An agent group can be used instead of a single agent to provide fail-over capabilities.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::CreateAgent

System privilege: AGENT ADMIN

Examples

Create an agent group named TEST_GROUP.

CREATE AGENT GROUP TEST_GROUP;

Related Information

ALTER AGENT Statement [Smart Data Integration] [page 158]
CREATE AGENT Statement [Smart Data Integration] [page 168]

7.1.9 CREATE AUDIT POLICY Statement [Smart Data Integration]

The CREATE AUDIT POLICY statement creates a new audit policy, which can then be enabled and cause the
specified audit actions to occur.

The CREATE AUDIT POLICY SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the CREATE AUDIT POLICY topic for complete information. The
information below is specific to smart data integration functionality.

Syntax

Refer to the SAP HANA SQL and System Views Reference for complete information about CREATE AUDIT
POLICY syntax.

Syntax Elements

Syntax elements specific to smart data integration are described below. For information about syntax elements
that are not specific to smart data integration, refer to the SAP HANA SQL and System Views Reference.

<audit_action_name> ::= CREATE AGENT
| ALTER AGENT
| DROP AGENT
| CREATE AGENT GROUP
| DROP AGENT GROUP
| CREATE ADAPTER
| ALTER ADAPTER
| DROP ADAPTER
| CREATE REMOTE SUBSCRIPTION
| ALTER REMOTE SUBSCRIPTION
| DROP REMOTE SUBSCRIPTION
| PROCESS REMOTE SUBSCRIPTION EXCEPTION

Audit Action Name                        Group Number   Audit Operation

CREATE AGENT                             17             Registering a Data Provisioning Agent
ALTER AGENT                              17             Altering a Data Provisioning Agent's registration
DROP AGENT                               17             Dropping a Data Provisioning Agent registration
CREATE ADAPTER                           17             Registering a Data Provisioning Adapter
ALTER ADAPTER                            17             Altering the registration of a Data Provisioning Adapter
DROP ADAPTER                             17             Dropping the registration of a Data Provisioning Adapter
CREATE REMOTE SUBSCRIPTION               17             Creating a subscription to a remote source
ALTER REMOTE SUBSCRIPTION                17             Altering a subscription to a remote source
DROP REMOTE SUBSCRIPTION                 17             Dropping a subscription to a remote source
PROCESS REMOTE SUBSCRIPTION EXCEPTION    17             Processing exceptions raised by a subscribed remote source

Description

The CREATE AUDIT POLICY statement creates a new audit policy. This audit policy can then be enabled and
cause the auditing of the specified audit actions to occur.

Permissions

Only database users with the CATALOG READ or INIFILE ADMIN system privilege can view information in the
M_INIFILE_CONTENTS view. For other database users, this view is empty. Users with the AUDIT ADMIN
privilege can see audit-relevant parameters.
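
Example

For illustration, a minimal sketch of creating and enabling a policy that audits remote subscription changes; the policy name is a placeholder, and the general CREATE AUDIT POLICY options (audit status, level) follow the SAP HANA SQL and System Views Reference:

CREATE AUDIT POLICY "AUDIT_REMOTE_SUBSCRIPTIONS"
    AUDITING ALL CREATE REMOTE SUBSCRIPTION, ALTER REMOTE SUBSCRIPTION, DROP REMOTE SUBSCRIPTION
    LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_REMOTE_SUBSCRIPTIONS" ENABLE;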

Related Information

CREATE AUDIT POLICY Statement (Access Control) (SAP HANA SQL and System Views Reference)

7.1.10 CREATE REMOTE SOURCE Statement [Smart Data Integration]

The CREATE REMOTE SOURCE statement defines an external data source connected to SAP HANA database.

The CREATE REMOTE SOURCE SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the CREATE REMOTE SOURCE topic for complete information. The
information below is specific to smart data integration functionality.

Syntax

Refer to the SAP HANA SQL and System Views Reference for complete information about CREATE REMOTE
SOURCE syntax.

Syntax Elements

Syntax elements specific to smart data integration are described below. For information about syntax elements
that are not specific to smart data integration, refer to the SAP HANA SQL and System Views Reference.
<adapter_clause>

Configures the adapter.

<adapter_clause> ::= ADAPTER <adapter_name>
[AT LOCATION {DPSERVER | AGENT <agent_name> | AGENT GROUP <agent_group_name>}]
CONFIGURATION <connection_info_string>

<agent_name> ::= <identifier>
<agent_group_name> ::= <identifier>

Refer to CREATE ADAPTER for a description of AT LOCATION.

Description

The CREATE REMOTE SOURCE statement defines an external data source connected to SAP HANA database.
Only database users having the system privilege CREATE SOURCE or DATA ADMIN are allowed to add a new
remote source.

Permissions

This statement requires the CREATE SOURCE system privilege.
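
Example

For illustration, a minimal sketch that creates a remote source on an agent group. The adapter name, agent group name, and CONFIGURATION string are placeholders and depend on the adapter in use:

CREATE REMOTE SOURCE MY_SOURCE ADAPTER "CustomAdapter"
AT LOCATION AGENT GROUP TEST_GROUP
CONFIGURATION '<configuration_xml>'
WITH CREDENTIAL TYPE 'PASSWORD'
USING '<CredentialEntry name="password"><user>myuser</user><password>mypassword</password></CredentialEntry>';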

Related Information

CREATE REMOTE SOURCE Statement (Access Control) (SAP HANA SQL and System Views Reference)

7.1.11 CREATE REMOTE SUBSCRIPTION Statement [Smart Data Integration]

The CREATE REMOTE SUBSCRIPTION statement creates a remote subscription in SAP HANA to capture
changes specified on the entire virtual table or part of a virtual table using a subquery.

Syntax

CREATE REMOTE SUBSCRIPTION [<schema_name>.]<subscription_name>
{
    {ON [<schema_name>.]<virtual_table_name>} |
    {AS (<subquery>)}
}
[WITH [RESTRICTED] SCHEMA CHANGES]
{ TARGET TABLE <table_spec> <load_behavior> } |
{ TARGET TASK <task_spec> } |
{ PROCEDURE <proc_spec> }

Syntax Elements

<subscription_name>

The name of the remote subscription.

<subscription_name> ::= <identifier>

ON [<schema_name>.]<virtual_table_name>
See "Remote subscription for TARGET TASK or TARGET TABLE using ON Clause"
below.
AS (<subquery>)
See "Remote subscription for TARGET TASK or TARGET TABLE using AS Clause"
below.
[WITH [RESTRICTED] SCHEMA CHANGES]
Include this clause to propagate source schema changes to the SAP HANA virtual table
and remote subscription target table.

WITH SCHEMA CHANGES corresponds to the replication task options Initial + realtime
with structure or Realtime only with structure and the flowgraph options Real-time and
with Schema Change.

With the optional RESTRICTED clause, WITH RESTRICTED SCHEMA CHANGES propagates source schema changes only to the SAP HANA virtual table, and not the remote subscription target table.
<table_spec>
The table definition.

<table_spec> ::= [<schema_name>].<table_name>

<load_behavior>

[ CHANGE TYPE COLUMN <column_name> CHANGE TIME COLUMN <column_name> CHANGE SEQUENCE COLUMN <column_name> { INSERT | UPSERT } ]

CHANGE { TYPE | TIME | SEQUENCE } COLUMN <column_name>

For a target table that logs the loading history, these parameters specify the target column names that show the change type and the corresponding timestamp for each operation. The CHANGE TYPE COLUMN <column_name> displays I, U, or D for INSERT, UPSERT, or DELETE. When multiple operations of the same type occur on the same source row with the same timestamp (because the operations are in the same transaction), use the CHANGE SEQUENCE COLUMN <column_name>, which adds an incremental digit to distinguish the operations.

The load behavior options are:

UPSERT: INSERT and UPDATE apply as is; DELETE converts to UPDATE.

INSERT: INSERT applies as is; UPDATE and DELETE convert to INSERT.

The following example is for UPSERT for a remote subscription called user.subscription on a source table called SQLServer_dbo.table. The target table user.table includes a column called CHANGE_TYPE (with a data type of VARCHAR or NVARCHAR) and a column CHANGE_TIME (with a data type of TIMESTAMP).

CREATE REMOTE SUBSCRIPTION user.subscription
ON "user"."SQLServer_dbo.table"
TARGET TABLE user.table
CHANGE TYPE COLUMN "CHANGE_TYPE"
CHANGE TIME COLUMN "CHANGE_TIME"
UPSERT;

The following example for INSERT is for the same remote subscription and additionally includes the CHANGE_SEQUENCE column.

CREATE REMOTE SUBSCRIPTION user.subscription
ON "user"."SQLServer_dbo.table"
TARGET TABLE user.table
CHANGE TYPE COLUMN "CHANGE_TYPE"
CHANGE TIME COLUMN "CHANGE_TIME"
CHANGE SEQUENCE COLUMN "CHANGE_SEQUENCE"
INSERT;

<task_spec>

The task definition.

<task_spec> ::= TARGET TASK [<schema_name>.]<task_name>
[(<var_list>)]
[PROCEDURE PARAMETERS(<param_list>)]

<var_list> specifies one or more start task variables.

<var_list> ::= <start_task_var>[{, <start_task_var>}...]

<start_task_var> specifies the name and value for a start task variable.

<start_task_var> ::= <var_name> => <var_value>

<var_name> is the name of variable that was defined within the task plan.

Variable values provided in this section will be used at runtime (for example, when
executing the task using START TASK).

<var_name> ::= <identifier>

<var_value> is the value that should be used in place of the variable name specified
when executing the task.

<var_value> ::= <string_literal>

<param_list> specifies one or more start task parameters.

<param_list> ::= <start_task_param> [{, <start_task_param>}...]

<start_task_param> ::= <identifier>

If the task uses table types for input and/or output, then the task expects actual table,
virtual table, or view names at runtime. These actual tables, virtual tables, or view
names are specified as task parameters. Depending on the type of remote subscription
being created, the task parameters may or may not need actual table, virtual table, or
view names for specific parameters (see below for more details).
<proc_spec>

{ PROCEDURE [<schema_name>.]<proc_name>[(<param_list>)] }

Description

The CREATE REMOTE SUBSCRIPTION statement creates a remote subscription in SAP HANA to capture
changes specified on the entire virtual table or part of a virtual table using a subquery. The changed data can
be applied to an SAP HANA target table or passed to a TASK or PROCEDURE if the changes require
transformation. The owner of the remote subscription must have the following privileges:

● SELECT privilege on tables specified in the ON or AS <subquery> clauses
● INSERT, UPDATE, DELETE privileges on the target table
● EXECUTE privilege on the stored procedure
● START TASK privilege on the task

 Note

If you create a remote subscription using the CREATE REMOTE SUBSCRIPTION SQL statement, use a technical user for the Credentials Mode parameter when creating a remote source.

Permissions

This statement requires the CREATE REMOTE SUBSCRIPTION object privilege on the remote source.

Remote subscription for TARGET TASK or TARGET TABLE using ON Clause

CREATE REMOTE SUBSCRIPTION [<schema_name>.]<subscription_name>
ON [<schema_name>.]<virtual_table_name>
TARGET TASK [<schema_name>.]<task_name>[(<var_list>)] [PROCEDURE PARAMETERS(<param_list>)]

<param_list> must contain one of the parameters as [<schema_name>.]<virtual_table_name>. This parameter must be the same schema and virtual table name as specified in the ON clause. Only one parameter in <param_list> can be a virtual table.

Each parameter in <param_list> is used in comparing its columns with the columns of the corresponding table type defined in the task plan. Hence, the order of parameters in <param_list> must match the order of table types defined in the task plan for input and output sources.

The task plan table type corresponding to the procedure parameter [<schema_name>.]<virtual_table_name> must have the same columns (excluding _OP_CODE and _COMMIT_TIMESTAMP). This table type must have _OP_CODE as the second-to-last column and _COMMIT_TIMESTAMP as the last column.

Remote subscription for TARGET TASK or TARGET TABLE using AS Clause

CREATE REMOTE SUBSCRIPTION [<schema_name>.]<subscription_name>
AS <subquery>
TARGET TASK [<schema_name>.]<task_name>[(<var_list>)] [PROCEDURE PARAMETERS(<param_list>)]

The AS (<subquery>) part of the syntax lets you define the SQL and the columns to use for the subscription.
The subquery should be a simple SELECT <column_list> from <virtual_table> and should not contain a
WHERE clause. The <column_list> should match the target table schema in column order and name.

<param_list> must contain one of the parameters as a table type, and this table type (schema and name) must be the same as the one defined in the task plan. This table type must also have the same columns as those output by the subquery (excluding _OP_CODE and _COMMIT_TIMESTAMP). This table type must have _OP_CODE as the second-to-last column and _COMMIT_TIMESTAMP as the last column. Only one parameter in <param_list> can be a table type.

Each parameter in <param_list> is used in comparing its columns with the columns of the corresponding table type defined in the task plan. Hence, the order of parameters in <param_list> must match the order of table types defined in the task plan for input and output sources.

Example

Create a remote subscription on a virtual table and apply changes using a real-time task.

CREATE SCHEMA "IM_SERVICES";


DROP REMOTE SOURCE "OracleAdapter" CASCADE;
CREATE REMOTE SOURCE "OracleAdapter" ADAPTER "OracleAdapter" AT LOCATION
dpserver CONFIGURATION '' WITH CREDENTIAL TYPE 'PASSWORD' USING '';
DROP TABLE "SYSTEM"."VT_EMPLOYEE_PK_TABLE";
CREATE VIRTUAL TABLE "SYSTEM"."VT_EMPLOYEE_PK_TABLE" AT
"OracleAdapter"."<NULL>"."<NULL>"."employee_pk_table";
DROP TYPE "IM_SERVICES"."TT_PARAM_IN";
DROP TYPE "IM_SERVICES"."TT_PARAM_OUT";
CREATE TYPE "IM_SERVICES"."TT_PARAM_IN" AS TABLE ("empno" integer, "deptid"
integer, "empname" VARCHAR(200), "salary" decimal(28,7), "bonus" double,
"_OP_CODE" VARCHAR(1),"_COMMIT_TIMESTAMP" SECONDDATE);
CREATE TYPE "IM_SERVICES"."TT_PARAM_OUT" AS TABLE ("empno" integer, "deptid"
integer, "empname" VARCHAR(200), "salary" decimal(28,7), "bonus" double);
DROP TABLE "IM_SERVICES"."T_OUT";
CREATE COLUMN TABLE "IM_SERVICES"."T_OUT" LIKE "IM_SERVICES"."TT_PARAM_OUT" ;
DROP TASK "IM_SERVICES"."TSKM_RT_VAR";
DROP REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR";
CREATE REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR"
AS (SELECT "empno","deptid","empname","salary","bonus" FROM
"SYSTEM"."VT_EMPLOYEE_PK_TABLE")
TARGET TASK "IM_SERVICES"."TSKM_RT_VAR" ("expr_var01_in1" => '100',
"expr_var02_in2" => 'upper(''walkerIN'')')
PROCEDURE PARAMETERS ( "IM_SERVICES"."TT_PARAM_IN", "IM_SERVICES"."T_OUT");
DROP REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR";
CREATE REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR"
ON "SYSTEM"."VT_EMPLOYEE_PK_TABLE"
TARGET TASK "IM_SERVICES"."TSKM_RT_VAR" ("expr_var01_in1" => '100',
"expr_var02_in2" => 'upper(''walkerIN'')')
PROCEDURE PARAMETERS ( "SYSTEM"."VT_EMPLOYEE_PK_TABLE", "IM_SERVICES"."T_OUT");
SELECT * FROM "SYS"."REMOTE_SUBSCRIPTIONS_";
TRUNCATE TABLE "IM_SERVICES"."T_OUT";
ALTER REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR" QUEUE;
ALTER REMOTE SUBSCRIPTION "IM_SERVICES"."RSUB_VAR" DISTRIBUTE;

Related Information

SQL Notation Conventions (SAP HANA SQL and System Views Reference)
Data Types (SAP HANA SQL and System Views Reference)

7.1.12 CREATE VIRTUAL PROCEDURE Statement [Smart Data Integration]

Creates a virtual procedure using the specified programming language that allows execution of the procedure
body at the specified remote source.

The CREATE VIRTUAL PROCEDURE SQL statement is available for use in other areas of SAP HANA, not only
SAP HANA smart data integration. Refer to the CREATE VIRTUAL PROCEDURE Statement (Procedural) topic
for complete information. The information below is specific to smart data integration functionality.

Syntax

CONFIGURATION <configuration_json_string>

Syntax Elements

<configuration_json_string>

A JSON string that includes required source procedure parameters.

Description

The CREATE VIRTUAL PROCEDURE statement creates a new virtual procedure from a remote source
procedure. When creating a virtual procedure using the SQL Console:

1. Return the metadata of the source procedure [number, types, and configuration (JSON) string] by invoking
the built-in SAP HANA procedure:

"PUBLIC"."GET_REMOTE_SOURCE_FUNCTION_DEFINITION"
('<remote_source_name>','<remote_object_unique_name>',?,?,?);

2. Edit the CONFIGURATION JSON string to include the appropriate parameter values.

Permissions

This statement requires the CREATE VIRTUAL PROCEDURE object privilege on the remote source.

Example

If you use the SQL Console to create a virtual procedure, the following example illustrates an ABAP adapter.

CREATE VIRTUAL PROCEDURE BAPI_BANK_GETLIST (
IN BANK_CTRY NVARCHAR(6),
IN MAX_ROWS INT,
OUT RETURN_TYPE NVARCHAR (2),
OUT RETURN_ID NVARCHAR (40),
OUT RETURN_NUMBER VARCHAR (6) ,
OUT RETURN_MESSAGE NVARCHAR (440) ,
OUT RETURN_LOG_NO NVARCHAR (40),
OUT RETURN_LOG_MSG_NO VARCHAR (12),
OUT RETURN_MESSAGE_V1 NVARCHAR (100) ,
OUT RETURN_MESSAGE_V2 NVARCHAR (100),
OUT RETURN_MESSAGE_V3 NVARCHAR (100) ,
OUT RETURN_MESSAGE_V4 NVARCHAR (100),
OUT RETURN_PARAMETER NVARCHAR (64),
OUT RETURN_ROW INTEGER,
OUT RETURN_FIELD NVARCHAR (60),
OUT RETURN_SYSTEM NVARCHAR (20),
IN BANK_LIST_IN TABLE (
BANK_CTRY NVARCHAR (6),
BANK_KEY NVARCHAR (30),
BANK_NAME NVARCHAR (120) ,
CITY NVARCHAR (70)
),
OUT BANK_LIST TABLE (
BANK_CTRY NVARCHAR (6) ,
BANK_KEY NVARCHAR (30) ,
BANK_NAME NVARCHAR (120) ,
CITY NVARCHAR (70)
)
) CONFIGURATION '
{
"__DP_UNIQUE_NAME__": "BAPI_BANK_GETLIST",
"__DP_VIRTUAL_PROCEDURE__": true
}' AT "QA1";

Then call the procedure as follows:

CALL bapi_bank_getlist('DE', 1000, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, bank_list, ?);

where "bank_list" is a table of type:

TABLE (
    BANK_CTRY NVARCHAR (6),
    BANK_KEY NVARCHAR (30),
    BANK_NAME NVARCHAR (120),
    CITY NVARCHAR (70)
)

For more information about using the SQL Console, see the SAP HANA Administration Guide.

7.1.13 DROP ADAPTER Statement [Smart Data Integration]

The DROP ADAPTER statement removes an adapter from all locations.

Syntax

DROP ADAPTER <adapter_name> [<drop_option>]

Syntax Elements

<adapter_name>

The name of the adapter to be dropped.

<adapter_name> ::= <identifier>

<drop_option>

When <drop_option> is not specified, a restrict drop is performed.

<drop_option> ::= CASCADE | RESTRICT

CASCADE drops the adapter and its dependent objects. RESTRICT drops the adapter only if it does not have any dependent objects.

Description

The DROP ADAPTER statement removes an adapter from all locations.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::RemoveLocationFromAdapter

System privilege: ADAPTER ADMIN

Example

Create two agents and an adapter at both the agents.

CREATE AGENT TEST_AGENT_1 PROTOCOL 'TCP' HOST 'test_host1' PORT 5050;
CREATE AGENT TEST_AGENT_2 PROTOCOL 'HTTP';
CREATE ADAPTER TEST_ADAPTER AT LOCATION AGENT TEST_AGENT_1;
ALTER ADAPTER TEST_ADAPTER ADD LOCATION AGENT TEST_AGENT_2;
-- Drop adapter TEST_ADAPTER.
DROP ADAPTER TEST_ADAPTER;

7.1.14 DROP AGENT Statement [Smart Data Integration]

The DROP AGENT statement removes an agent.

Syntax

DROP AGENT <agent_name> [<drop_option>]

Syntax Elements

<agent_name>

The name of the agent to be dropped.

<agent_name> ::= <identifier>

<drop_option>

When <drop_option> is not specified, a restrict drop is performed.

<drop_option> ::= CASCADE | RESTRICT

CASCADE drops the agent and its dependent objects.

RESTRICT drops the agent only if it does not have any dependent objects.

Description

The DROP AGENT statement removes an agent.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::DropAgent

System privilege: AGENT ADMIN

Example

Create an agent TEST_AGENT and an adapter CUSTOM_ADAPTER on the agent. Make sure that the custom adapter is set up on the agent.

CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host' PORT 5050;
CREATE ADAPTER CUSTOM_ADAPTER AT LOCATION AGENT TEST_AGENT;

Drop the agent called TEST_AGENT.

DROP AGENT TEST_AGENT;

7.1.15 DROP AGENT GROUP Statement [Smart Data Integration]

The DROP AGENT GROUP statement removes an agent clustering group.

Syntax

DROP AGENT GROUP <agent_group_name>

Syntax Elements

<agent_group_name>

The name of the agent group to be dropped.

<agent_group_name> ::= <identifier>

Description

The DROP AGENT GROUP statement removes an agent clustering group. All dependent objects must be
removed before an agent clustering group can be dropped.

Permissions

Role: sap.hana.im.dp.monitor.roles::Operations

Application privilege: sap.hana.im.dp.monitor::DropAgent

System privilege: AGENT ADMIN

Example

Create an agent group TEST_GROUP.

CREATE AGENT GROUP TEST_GROUP;

Drop the agent group TEST_GROUP.

DROP AGENT GROUP TEST_GROUP;

7.1.16 DROP REMOTE SUBSCRIPTION Statement [Smart Data Integration]

The DROP REMOTE SUBSCRIPTION statement drops an existing remote subscription.

Syntax

DROP REMOTE SUBSCRIPTION [<schema_name>.]<subscription_name>

Syntax Elements

<subscription_name>

The name of the remote subscription.

<subscription_name> ::= <identifier>

Description

The DROP REMOTE SUBSCRIPTION statement drops an existing remote subscription. If the remote subscription is actively receiving changes from the source table, then a RESET command is automatically called before dropping it.

Permissions

This statement requires the DROP object privilege on the remote source.

Example

Drop the remote subscription TEST_SUB.

DROP REMOTE SUBSCRIPTION TEST_SUB;

7.1.17 GRANT Statement [Smart Data Integration]

GRANT is used to grant privileges and structured privileges to users and roles. GRANT is also used to grant
roles to users and other roles.

The GRANT SQL statement is available for use in other areas of SAP HANA, not only SAP HANA smart data
integration. Refer to the GRANT topic for complete information. The information below is specific to smart data
integration functionality.

Syntax

Refer to the GRANT topic for complete information about GRANT syntax.

Syntax Elements

Syntax elements specific to smart data integration are described below. For information about syntax elements
that are not specific to smart data integration, refer to the GRANT topic.
<system_privilege>

System privileges are used to restrict administrative tasks.

<system_privilege> ::= ADAPTER ADMIN | AGENT ADMIN

The table below describes the supported system privileges.

System Privilege   Privilege Purpose

ADAPTER ADMIN      Controls the execution of the following adapter-related commands: CREATE ADAPTER, DROP ADAPTER, and ALTER ADAPTER. Also allows access to the ADAPTERS and ADAPTER_LOCATIONS system views.

AGENT ADMIN        Controls the execution of the following agent-related commands: CREATE AGENT, DROP AGENT, and ALTER AGENT. Also allows access to the AGENTS and ADAPTER_LOCATIONS system views.

<source_privilege>

Source privileges are used to restrict the access and modifications of a source entry.

<source_privilege> ::= CREATE REMOTE SUBSCRIPTION | PROCESS REMOTE SUBSCRIPTION EXCEPTION

Source Privilege                        Privilege Purpose

CREATE REMOTE SUBSCRIPTION              This privilege allows the creation of remote subscriptions executed on this source entry. Remote subscriptions are created in a schema and point to a virtual table or SQL on tables to capture change data.

PROCESS REMOTE SUBSCRIPTION EXCEPTION   This privilege allows processing exceptions on this source entry. Exceptions that are relevant for all remote subscriptions are created for a remote source entry.

<object_privilege>

Object privileges are used to restrict the access and modifications on database objects. Database objects are tables, views, sequences, procedures, and so on.

<object_privilege> ::= AGENT MESSAGING | PROCESS REMOTE SUBSCRIPTION EXCEPTION

The table below describes the supported object privileges.

Object Privilege                        Privilege Purpose                                             Command Types

AGENT MESSAGING                         Authorizes the user with which the agent communicates        DDL
                                        with the data provisioning server using HTTP protocol.

PROCESS REMOTE SUBSCRIPTION EXCEPTION   Authorizes processing exceptions of a remote subscription.   DDL

Not all object privileges are applicable to all kinds of database objects. To learn which object types allow which privilege to be used, see the table below.

Privilege                               Schema  Table  View  Sequence  Function/Procedure  Remote Subscription  Agent

AGENT MESSAGING                         --      --     --    --        --                  --                   YES

PROCESS REMOTE SUBSCRIPTION EXCEPTION   --      --     --    --        --                  YES                  --
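
Example

For illustration, a sketch of granting these privileges; the user names and the remote source name are placeholders:

GRANT AGENT ADMIN TO DP_ADMIN_USER;
GRANT ADAPTER ADMIN TO DP_ADMIN_USER;
GRANT CREATE REMOTE SUBSCRIPTION ON REMOTE SOURCE MY_SOURCE TO REPL_USER;
GRANT PROCESS REMOTE SUBSCRIPTION EXCEPTION ON REMOTE SOURCE MY_SOURCE TO REPL_USER;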

Related Information

GRANT Statement (Access Control) (SAP HANA SQL and System Views Reference)

7.1.18 PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration]

The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an exception
should be processed.

Syntax

PROCESS REMOTE SUBSCRIPTION EXCEPTION <exception_id> { RETRY | IGNORE }

Syntax Elements

<exception_id>

The exception ID for remote subscription or remote source.

<exception_id> ::= <integer_literal>

RETRY    Indicates to retry the current failed operation. If the failure is due to opening a connection to a remote source, then the connection is established. If the failure happens when applying changed data to a target table, then the RETRY operation retries the transaction again on the target table.

IGNORE   Indicates to ignore the current failure. If the failure happens when applying changed data to a target table, then the IGNORE operation skips the current transaction and proceeds with the next transaction. The exception is cleared.

Description

The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an exception
should be processed.

Permissions

This statement requires the PROCESS REMOTE SUBSCRIPTION EXCEPTION object privilege on the remote
source.

Example

Ignore exception 101.

PROCESS REMOTE SUBSCRIPTION EXCEPTION 101 IGNORE;
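
To retry the failed operation instead, and to look up pending exception IDs beforehand, a sketch; it assumes the EXCEPTION_OID and ERROR_MESSAGE columns of the REMOTE_SUBSCRIPTION_EXCEPTIONS system view described later in this guide:

SELECT EXCEPTION_OID, ERROR_MESSAGE FROM "SYS"."REMOTE_SUBSCRIPTION_EXCEPTIONS";
PROCESS REMOTE SUBSCRIPTION EXCEPTION 101 RETRY;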

7.1.19 SESSION_CONTEXT Function [Smart Data Integration]

Returns the value of session_variable assigned to the current user.

SESSION_CONTEXT is available for use in other areas of SAP HANA, not only SAP HANA smart data
integration. Refer to the SESSION_CONTEXT topic for complete information. The information below is specific
to smart data integration functionality.

Syntax

SESSION_CONTEXT(<session_variable>)

Description

'TASK_EXECUTION_ID' is a predefined session variable that is set by the server and is read-only (it cannot be SET or UNSET).
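
For example, to read the task execution ID of the last task started in the current session (this query also appears in the START TASK topic below):

SELECT SESSION_CONTEXT('TASK_EXECUTION_ID') FROM DUMMY;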

Related Information

SESSION_CONTEXT Function (Miscellaneous) (SAP HANA SQL and System Views Reference)

7.1.20 START TASK Statement [Smart Data Integration]

Starts a task.

Syntax

START TASK <task_name> [ASYNC] [(<var_list>)] [PROCEDURE PARAMETERS(<param_list>)]

Syntax Elements

<task_name>

The identifier of the task to be called, with optional schema name.

<task_name> ::= [<schema_name>.]<identifier>


<schema_name> ::= <identifier>

<var_list>

Specifies one or more start task variables. Variables passed to a task are scalar constants. Scalar parameters are assumed to be NOT NULL.

<var_list> ::= <start_task_var>[{, <start_task_var>}...]

<start_task_var> specifies the name and value for a start task variable. A task can contain variables that allow for dynamic replacement of task plan parameters. This section is where, at run time during START TASK, the values that should be used for those variables can be provided.

<start_task_var> ::= <var_name> => <var_value>

<var_name> is the name of the variable that was defined.

<var_name> ::= <identifier>

<var_value> is the value that should be used in place of the variable name specified when executing the task.

<var_value> ::= <string_literal>

<param_list>

Specifies one or more start task parameters.

<param_list> ::= <start_task_param>[{, <start_task_param>}...]

<start_task_param> ::= <identifier>

Task parameters. If the task uses table types for input and/or output, then those need
to be specified within this section. For more information about these data types, see
BNF Lowest Terms Representations and Data Types in the Notation topic.

Parameters are implicitly defined as either IN or OUT, as inferred from the task plan.
Arguments for IN parameters could be anything that satisfies the schema of the input
table type (for example, a table variable internal to the procedure, or a temporary
table). The actual value passed for tabular OUT parameters can be, for example, '?', a
physical table name, or a table variable defined inside the procedure.

Description

Starts a task.

When executed by a client, START TASK behaves in a way consistent with SQL standard semantics; for example, Java clients can call it using a JDBC CallableStatement. Scalar output variables are scalar values that can be retrieved directly from the callable statement.

 Note

Unquoted identifiers are implicitly treated as uppercase. Quoting identifiers respects capitalization and allows the use of white spaces, which are normally not allowed in SQL identifiers.

Permissions

This statement requires the EXECUTE privilege on the schema in which the task was created.

Examples

The TASK performTranslation was already created, and the task plan has two table type input parameters and a
single table type output parameter. You call the performTranslation task passing in the table types to use for
execution.

START TASK performTranslation PROCEDURE PARAMETERS (in1, in2, out1);
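
If the task plan also defines variables, they can be supplied before the procedure parameters, and ASYNC starts the task in the background. A sketch, assuming a task variable named "expr_var01" was defined in the task plan:

START TASK performTranslation ASYNC ("expr_var01" => '100') PROCEDURE PARAMETERS (in1, in2, out1);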

SQL Script

You can call START TASK within the SQL Script CREATE PROCEDURE. Refer to the SAP HANA SQL Script
Reference for complete details about CREATE PROCEDURE.

<proc_sql> now includes <start_task>:

<proc_sql> ::= <subquery>
| <select_into_stmt>
| <insert_stmt>
| <delete_stmt>
| <update_stmt>
| <replace_stmt>
| <call_stmt>
| <create_table>
| <drop_table>
| <start_task>
START TASK is not supported in:

● Table UDF
● Scalar UDF
● Trigger
● Read-only procedures

TASK_EXECUTION_ID session variable

The TASK_EXECUTION_ID session variable provides a unique task execution ID. Knowing the proper task execution ID is critical for various pieces of task functionality, including querying for side-effect information and task processing status, and canceling a task.

TASK_EXECUTION_ID is a read-only session variable. Only the internal start task code updates the value.

The value of TASK_EXECUTION_ID is set during the START TASK command execution. In the case of asynchronous execution (START TASK ASYNC), the value is updated before the command returns, so it is available before the actual task has finished asynchronously running. If the execution of START TASK was successful, then the value is updated to the unique execution ID for that START TASK execution. If the execution of START TASK was unsuccessful, then the TASK_EXECUTION_ID variable is set back to the state as if no START TASK was run.

The user can obtain the value of TASK_EXECUTION_ID by using either of the following:

● The existing SESSION_CONTEXT() function. If this function is used and no tasks have been run, or a task was run and it was unsuccessful, then a NULL value is returned.
● The M_SESSION_CONTEXT monitoring view. Query it using a KEY value of 'TASK_EXECUTION_ID'. If no row exists with that key, the session variable has not been set (no tasks were run, or the last task execution was unsuccessful).

 Note

Session variables are string values. The user needs to cast appropriately based on how they want to use the
value.

Table 34: Examples

Action                                                          SQL

Obtain the last task execution ID                               SELECT SESSION_CONTEXT('TASK_EXECUTION_ID') FROM dummy;

See monitoring information for the last task that was           SELECT * FROM M_TASKS WHERE TASK_EXECUTION_ID = CAST(SESSION_CONTEXT('TASK_EXECUTION_ID') AS BIGINT);
executed (with type casting)

Cancel the last task that was executed (with type casting)      CANCEL TASK CAST(SESSION_CONTEXT('TASK_EXECUTION_ID') AS BIGINT);

Related Information

SQL Notation Conventions (SAP HANA SQL and System Views Reference)

7.2 System Views

System views allow you to query for various information about the system state using SQL commands. The
results appear as tables.

System views are located in the SYS schema. In a system with tenant databases, every database has a SYS
schema with system views that contain information about that database only. In addition, the system database
has a further schema, SYS_DATABASES, which contains views for monitoring the system as a whole. The views
in the SYS_DATABASES schema provide aggregated information from a subset of the views available in the SYS
schema of all tenant databases in the system. These union views have the additional column DATABASE_NAME
to allow you to identify to which database the information refers. To be able to view information in these views,
you need the system privilege CATALOG READ or DATABASE ADMIN.

SAP HANA system views are separated into two categories: metadata views and runtime views. Metadata
views provide metadata about objects in the database, including options or settings that were set using a DDL
statement. Runtime views provide actual HANA runtime data, including statistics and status information
related to the execution of DML statements. Runtime views start with M_ for monitoring.
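
For example, to check the registration and status of all Data Provisioning Agents from the SYS schema (the M_AGENTS view is documented below):

SELECT AGENT_NAME, AGENT_VERSION, AGENT_STATUS FROM "SYS"."M_AGENTS";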

ADAPTER_CAPABILITIES System View [Smart Data Integration] [page 195]
Specifies the SQL capabilities of the adapters stored in the system.

ADAPTER_LOCATIONS System View [Smart Data Integration] [page 196]
Specifies the location of adapters.

ADAPTERS System View [Smart Data Integration] [page 196]
Stores adapters available in the SAP HANA system.

AGENT_CONFIGURATION System View [Smart Data Integration] [page 197]
Agent configuration

AGENT_GROUPS System View [Smart Data Integration] [page 197]
Lists active data provisioning agent groups in the system.

AGENTS System View [Smart Data Integration] [page 197]
Lists active data provisioning agents in the system.

M_AGENTS System View [Smart Data Integration] [page 198]
Provides the status of all agents registered in the SAP HANA database.

M_REMOTE_SOURCES System View [Smart Data Integration] [page 198]
Stores dictionary status information, remote source owner information, and the status of data collection.

M_REMOTE_SUBSCRIPTION_COMPONENTS System View [Smart Data Integration] [page 199]
Provides the status of a remote subscription for each internal component.

M_REMOTE_SUBSCRIPTION_STATISTICS System View [Smart Data Integration] [page 200]
Provides current processing details of a remote subscription (for example, the number of messages or transactions received and applied since the start of the SAP HANA database).

M_REMOTE_SUBSCRIPTIONS System View [Smart Data Integration] [page 200]
Provides the status and run-time information of a remote subscription.

M_SESSION_CONTEXT System View [Smart Data Integration] [page 202]
Session variables for each connection

REMOTE_SOURCE_OBJECT_COLUMNS System View [Smart Data Integration] [page 203]
If the adapter can provide column-level information for each table in the remote system, once this dictionary is built you can search for relationships between tables. This table is useful for analyzing relationships between tables in the remote source.

REMOTE_SOURCE_OBJECT_DESCRIPTIONS System View [Smart Data Integration] [page 203]
Stores descriptions of browsable nodes in different languages.

REMOTE_SOURCE_OBJECTS System View [Smart Data Integration] [page 204]
Stores browsable nodes as well as importable objects (virtual tables). This view is built from remote source metadata dictionaries.

REMOTE_SOURCES System View [Smart Data Integration] [page 204]
Remote sources

REMOTE_SUBSCRIPTION_EXCEPTIONS System View [Smart Data Integration] [page 205]
Provides details about an exception that occurred during the execution of a remote subscription. The exceptions can be processed using the PROCESS REMOTE SUBSCRIPTION EXCEPTION SQL statement.

REMOTE_SUBSCRIPTIONS System View [Smart Data Integration] [page 206]
Lists all the remote subscriptions created for a remote source.

TASK_CLIENT_MAPPING System View [Smart Data Integration] [page 206]
Provides the client mapping when a task is created by the ABAP API.

TASK_COLUMN_DEFINITIONS System View [Smart Data Integration] [page 207]
Defines the columns present in a particular table.

TASK_EXECUTIONS System View [Smart Data Integration] [page 207]
Task-level run-time statistics generated when START TASK is run.

TASK_LOCALIZATION System View [Smart Data Integration] [page 208]
Contains localized values for the task framework tables.

TASK_OPERATIONS System View [Smart Data Integration] [page 209]
Contains all operations that exist for a given task, as well as details about those operations.

TASK_OPERATIONS_EXECUTIONS System View [Smart Data Integration] [page 209]
Operations-level task statistics generated when START TASK is run.

TASK_PARAMETERS System View [Smart Data Integration] [page 210]
Details about the task parameters view

TASK_TABLE_DEFINITIONS System View [Smart Data Integration] [page 211]
Contains all of the tables used by the various side-effect producing operations.

TASK_TABLE_RELATIONSHIPS System View [Smart Data Integration] [page 212]
Defines the relationships, if any, between tables within an operation.

TASKS System View [Smart Data Integration] [page 212]
Details about tasks.

VIRTUAL_COLUMN_PROPERTIES System View [Smart Data Integration] [page 213]
Lists the properties of the columns in a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.

VIRTUAL_TABLE_PROPERTIES System View [Smart Data Integration] [page 214]
Lists the properties of a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.

BEST_RECORD_GROUP_MASTER_STATISTICS System View [Smart Data Quality] [page 214]
Contains a summary of Best Record group master statistics.

BEST_RECORD_RESULTS System View [Smart Data Quality] [page 215]
Contains governance information for every column in every record that is updated in the best record process.

BEST_RECORD_STRATEGIES System View [Smart Data Quality] [page 216]
Contains information on which strategies are used in each strategy group and in which order.

CLEANSE_ADDRESS_RECORD_INFO System View [Smart Data Quality] [page 217]
Describes how well an address was assigned as well as the type of address.

CLEANSE_CHANGE_INFO System View [Smart Data Quality] [page 218]
Describes the changes made during the cleansing process.

CLEANSE_COMPONENT_INFO System View [Smart Data Quality] [page 219]
Identifies the location of parsed data elements in the input and output.

CLEANSE_INFO_CODES System View [Smart Data Quality] [page 220]
Contains one row per info code generated by the cleansing process.

CLEANSE_STATISTICS System View [Smart Data Quality] [page 221]
Contains a summary of Cleanse statistics.

GEOCODE_INFO_CODES System View [Smart Data Quality] [page 222]
Contains one row per info code generated by the geocode transformation process.

GEOCODE_STATISTICS System View [Smart Data Quality] [page 223]
Contains a summary of Geocode statistics.

MATCH_GROUP_INFO System View [Smart Data Quality] [page 223]
Contains one row for each match group.

MATCH_RECORD_INFO System View [Smart Data Quality] [page 224]
Contains one row for each matching record per level.

MATCH_SOURCE_STATISTICS System View [Smart Data Quality] [page 225]
Contains counts of matches within and between data sources.

MATCH_STATISTICS System View [Smart Data Quality] [page 225]
Contains statistics regarding the run of the transformation operation.

MATCH_TRACING System View [Smart Data Quality] [page 226]
Contains one row for each match decision made during the matching process.

Related Information

System Views Reference for Additional SAP HANA Contexts (SAP HANA SQL and System Views Reference)

7.2.1 ADAPTER_CAPABILITIES System View [Smart Data Integration]

Specifies the SQL capabilities of the adapters stored in the system.

Structure

Column Data type Description

ADAPTER_NAME NVARCHAR(64) Adapter name

SOURCE_VERSION NVARCHAR(64) Source versions supported by the adapter

7.2.2 ADAPTER_LOCATIONS System View [Smart Data Integration]

Specifies the location of adapters.

Structure

Column Data type Description

ADAPTER_NAME NVARCHAR(64) Adapter name

LOCATION VARCHAR(11) Location of the adapter: 'indexserver', 'dpserver', 'agent'

AGENT_NAME NVARCHAR(256) Agent name

7.2.3 ADAPTERS System View [Smart Data Integration]

Stores adapters available in the SAP HANA system.

Structure

Column                        Data type         Description

ADAPTER_NAME                  NVARCHAR(64)      Adapter name

PROPERTIES                    NVARCHAR(1000)    Optional properties of the adapter such as display_name and description

CONFIGURATION                 NCLOB             UI properties that must be displayed when configuring remote data source

IS_SYSTEM_ADAPTER             VARCHAR(5)        Specifies whether the adapter is a system adapter: 'TRUE'/'FALSE'

IS_ESS_DEFINITION_SUPPORTED   VARCHAR(5)        Specifies if the procedure GET_REMOTE_SOURCE_TABLE_ESS_DEFINITIONS is enabled for remote sources created using this adapter: 'TRUE'/'FALSE'

7.2.4 AGENT_CONFIGURATION System View [Smart Data Integration]

Agent configuration

Structure

Column name Data type Description

AGENT_NAME NVARCHAR(256) Agent name

KEY VARCHAR(128) Agent property key

VALUE NCLOB Agent property value

7.2.5 AGENT_GROUPS System View [Smart Data Integration]

Lists active data provisioning agent groups in the system.

Structure

Column Data type Description

AGENT_GROUP_NAME NVARCHAR(256) Name of the agent group.

7.2.6 AGENTS System View [Smart Data Integration]

Lists active data provisioning agents in the system.

Structure

Column             Data type        Description

AGENT_NAME         NVARCHAR(256)    Agent name

PROTOCOL           VARCHAR(4)       Protocol for communication with SAP HANA database: 'TCP', 'HTTP'

AGENT_HOST         NVARCHAR(64)     Agent host specified when using TCP

AGENT_PORT         INTEGER          Agent port specified when using TCP

IS_SSL_ENABLED     VARCHAR(5)       Specifies whether the agent listening on TCP port uses SSL

AGENT_GROUP_NAME   NVARCHAR(256)    Agent clustering group to which the agent belongs.

7.2.7 M_AGENTS System View [Smart Data Integration]

Provides the status of all agents registered in the SAP HANA database.

Structure

Column Data type Description

AGENT_NAME NVARCHAR(256) Agent name

FREE_PHYSICAL_MEMORY BIGINT Free physical memory on the host

FREE_SWAP_SPACE BIGINT Free swap memory on the host

LAST_CONNECT_TIME TIMESTAMP The last time the session cookie was used for successful re-connection

SYS_TIMESTAMP TIMESTAMP Host timestamp in local time zone

USED_PHYSICAL_MEMORY BIGINT Used physical memory on the host

USED_SWAP_SPACE BIGINT Used swap memory on the host

UTC_TIMESTAMP TIMESTAMP Host timestamp in UTC

AGENT_VERSION VARCHAR(32) Agent version

AGENT_STATUS VARCHAR(16) Agent status

7.2.8 M_REMOTE_SOURCES System View [Smart Data Integration]

Stores dictionary status information, remote source owner information, and the status of data collection.

 Note

This system view is for keeping track of the status of metadata dictionaries for remote sources. If there is
no dictionary for a given remote source, it will not appear in the view.

For basic remote source information you can select from REMOTE_SOURCES. It includes the following.

● REMOTE_SOURCE_NAME
● ADAPTER_NAME
● CONNECTION_INFO
● AGENT_GROUP_NAME

Structure

Column                  Data type        Description

USER_NAME               NVARCHAR(256)    User name

REMOTE_SOURCE_NAME      NVARCHAR(256)    Remote source name

LAST_REFRESH_TIME       TIMESTAMP        The successful completion timestamp of the refresh operation

REFRESH_START_TIME      TIMESTAMP        The timestamp of when the refresh operation was executed

REFRESH_STATUS          VARCHAR(32)      Refresh operation status:
                                         ● STARTED
                                         ● COMPLETED
                                         ● RUNNING (GET OBJECTS)
                                         ● RUNNING (GET OBJECT DETAILS)
                                         ● FAILED
                                         ● CANCELLED
                                         ● CLEARED

REFRESH_ERROR_MESSAGE   NVARCHAR(2000)   Exception message that occurred during refresh operation

7.2.9 M_REMOTE_SUBSCRIPTION_COMPONENTS System View [Smart Data Integration]

Provides the status of a remote subscription for each internal component.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Remote subscription schema name

SUBSCRIPTION_NAME NVARCHAR(256) Remote subscription name

COMPONENT VARCHAR(10) ● DPSERVER
● ADAPTER
● RECEIVER
● APPLIER

STATUS VARCHAR Component status

MESSAGE VARCHAR Additional information

7.2.10 M_REMOTE_SUBSCRIPTION_STATISTICS System View [Smart Data Integration]

Provides current processing details of a remote subscription (for example, the number of messages or transactions received and applied since the start of the SAP HANA database).

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Remote subscription schema name

SUBSCRIPTION_NAME NVARCHAR(256) Remote subscription name

RECEIVED_MESSAGE_COUNT BIGINT Total message/transaction count received by the current connection

RECEIVED_MESSAGE_SIZE BIGINT Total size of messages/transactions received by the current connection

APPLIED_MESSAGE_COUNT BIGINT Total number of messages/transactions applied

APPLIED_MESSAGE_SIZE BIGINT Total size of messages/records applied

REJECTED_MESSAGE_COUNT BIGINT Total number of messages/records rejected

LAST_MESSAGE_RECEIVED TIMESTAMP Time at which the last message/transaction is received

LAST_MESSAGE_APPLIED TIMESTAMP Time at which the last message/transaction is applied

RECEIVER_LATENCY BIGINT Receiver latency in microseconds

APPLIER_LATENCY BIGINT Applier latency in microseconds
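
For example, to gauge replication lag per subscription you might run the following (a minimal sketch; the latency columns are reported in microseconds):

  -- Message throughput and latency per remote subscription
  SELECT SCHEMA_NAME, SUBSCRIPTION_NAME,
         RECEIVED_MESSAGE_COUNT, APPLIED_MESSAGE_COUNT,
         LAST_MESSAGE_RECEIVED, LAST_MESSAGE_APPLIED,
         RECEIVER_LATENCY / 1000000.0 AS RECEIVER_LATENCY_SEC,
         APPLIER_LATENCY / 1000000.0 AS APPLIER_LATENCY_SEC
    FROM M_REMOTE_SUBSCRIPTION_STATISTICS
   ORDER BY APPLIER_LATENCY DESC;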

7.2.11 M_REMOTE_SUBSCRIPTIONS System View [Smart Data Integration]

Provides the status and run-time information of a remote subscription.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Remote subscription schema name

SUBSCRIPTION_NAME NVARCHAR(256) Remote subscription name

STATE VARCHAR(256) State of event

OPTIMIZED_QUERY_STRING NCLOB Generated and saved so that if multiple subscriptions are interested in the same query result and have the same internal_distribution_id, both subscriptions can use the same result

OPTIMIZED_QUERY_HASH VARCHAR(128) Hash used to look up a match for the optimized query string

INTERNAL_DISTRIBUTION_ID BIGINT Generated integer to identify whether multiple target tables are interested in the changes from the same source SQL or virtual table

OPTIMIZED_QUERY_RESULTSET_TYPE TINYINT 0 - REGULAR, 1 - CLUSTER, 2 - POOL

REMOTE_SUBSCRIPTION NVARCHAR(256) An optional subscription name registered by the adapter in the remote source system

VOLUME_ID INTEGER Persistence Volume ID

BEGIN_MARKER VARCHAR(64) Generated begin marker in the format B<remote_source_oid>_<remote_subscription_oid>_<YYYYMMDDHH24MMSSFF7> when the QUEUE command is called

END_MARKER VARCHAR(64) Generated end marker in the format E<remote_source_oid>_<remote_subscription_oid>_<YYYYMMDDHH24MMSSFF7> when the DISTRIBUTE command is called

BEGIN_MARKER_TIME TIMESTAMP Timestamp when the QUEUE request is received

END_MARKER_TIME TIMESTAMP Timestamp when the DISTRIBUTE command is called

LAST_PROCESSED_TRANSACTION_ID VARBINARY(128) Transaction ID of the last processed transaction

LAST_PROCESSED_TRANSACTION_TIME TIMESTAMP Time when the last transaction was applied

LAST_PROCESSED_BEGIN_SEQUENCE_ID VARBINARY(68) Last processed transaction's begin record sequence ID

LAST_PROCESSED_COMMIT_SEQUENCE_ID VARBINARY(68) Last processed transaction's commit record sequence ID

LAST_RECEIVED_SEQUENCE_ID VARBINARY(68) Last received sequence ID

LAST_RECEIVED_CUSTOM_ID NVARCHAR(64) Last received custom ID. Custom IDs may be used by adapters with every changed-data row of a transaction.

LAST_PROCESSED_CUSTOM_ID NVARCHAR(64) Last processed custom ID. Custom IDs may be used by adapters with every changed-data row of a transaction.
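
For example, to check where each subscription stands in its lifecycle (a minimal sketch):

  -- Current state and last applied transaction per remote subscription
  SELECT SCHEMA_NAME, SUBSCRIPTION_NAME, STATE,
         LAST_PROCESSED_TRANSACTION_TIME
    FROM M_REMOTE_SUBSCRIPTIONS
   ORDER BY SCHEMA_NAME, SUBSCRIPTION_NAME;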

7.2.12 M_SESSION_CONTEXT System View [Smart Data Integration]

Session variables for each connection

 Note

The M_SESSION_CONTEXT view is available for use in other areas of SAP HANA, not only SAP HANA
smart data integration. Refer to the M_SESSION_CONTEXT topic for complete information. The
information below is specific to smart data integration functionality.

This view shows session variables of all open connections.

Each variable is categorized in the SECTION column as USER (user-defined variable set using the SET command or a client API call) or SYSTEM (predefined variable or system property).

Table 35: Predefined variables

Variable Name (M_SESSION_CONTEXT.KEY): TASK_EXECUTION_ID
Value Constraint: bigint
Set by Client or Server: server
Shown in M_SESSION_CONTEXT: yes
Server Usage: START TASK
Description: Shows unique task execution ID
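
For example, a session could read the variable back as follows (a minimal sketch; CURRENT_CONNECTION is the standard SAP HANA function that returns the current connection ID):

  -- Read the task execution ID the server set for the current session
  SELECT VALUE
    FROM M_SESSION_CONTEXT
   WHERE CONNECTION_ID = CURRENT_CONNECTION
     AND KEY = 'TASK_EXECUTION_ID';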

Related Information

M_SESSION_CONTEXT System View (SAP HANA SQL and System Views Reference)

7.2.13 REMOTE_SOURCE_OBJECT_COLUMNS System View [Smart Data Integration]

If the adapter can provide column-level information for each table in the remote system, you can search for relationships between tables once this dictionary is built. This view is useful for analyzing relationships between tables in the remote source.

Structure

Column Data type Description

USER_NAME NVARCHAR(256) For secondary credentials, the owner name needs to be known

REMOTE_SOURCE_NAME NVARCHAR(256) To uniquely identify a remote source

OBJECT_NAME NVARCHAR(5000) Unique name to identify remote source object

COLUMN_NAME NVARCHAR(256) Column name

DATA_TYPE_NAME VARCHAR(16) SAP HANA data type

REMOTE_DATA_TYPE_NAME VARCHAR(32) Remote source data type

REMOTE_CONTENT_TYPE NVARCHAR(256) Examples include address, unit of measure, user-defined types, ZIP code, and so on

LENGTH INTEGER Length/precision of the column

SCALE INTEGER Scale of the column

IS_NULLABLE VARCHAR(5) Various column properties

IS_AUTOINCREMENT VARCHAR(5) Various column properties
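
For example, to find remote objects that contain a similarly named column (a minimal sketch; the column-name pattern is illustrative):

  -- Locate remote objects whose columns look like customer identifiers
  SELECT REMOTE_SOURCE_NAME, OBJECT_NAME, COLUMN_NAME,
         DATA_TYPE_NAME, REMOTE_DATA_TYPE_NAME
    FROM REMOTE_SOURCE_OBJECT_COLUMNS
   WHERE COLUMN_NAME LIKE '%CUSTOMER_ID%';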

7.2.14 REMOTE_SOURCE_OBJECT_DESCRIPTIONS System View [Smart Data Integration]

Stores descriptions of browsable nodes in different languages.

Structure

Column Data type Description

USER_NAME NVARCHAR(256) User name

REMOTE_SOURCE_NAME NVARCHAR(256) Remote source name

OBJECT_NAME NVARCHAR(5000) Unique name to identify remote source object

LANGUAGE_CODE VARCHAR(2) Language code

DESCRIPTION NVARCHAR(5000) Description of this object

7.2.15 REMOTE_SOURCE_OBJECTS System View [Smart Data Integration]

Stores browsable nodes as well as importable objects (virtual tables). This view is built from remote source
metadata dictionaries.

Structure

Column Data type Description

USER_NAME NVARCHAR(256) User name

REMOTE_SOURCE_NAME NVARCHAR(256) Remote source name

OBJECT_NAME NVARCHAR(5000) Unique name to identify remote source object

DISPLAY_NAME NVARCHAR(256) Display name for this object

IS_IMPORTABLE VARCHAR(5) If the object is importable as a virtual table: 'TRUE'/'FALSE'

IS_EXPANDABLE VARCHAR(5) If the object can be expanded or browsed to get inner objects: 'TRUE'/'FALSE'

PARENT_OBJECT_NAME NVARCHAR(5000) The parent object name for this object

DEFINITION_TYPE VARCHAR(32) Object definition type

DEFINITION NCLOB Object definition

7.2.16 REMOTE_SOURCES System View [Smart Data Integration]

Remote sources

Structure

Column name Data type Description

REMOTE_SOURCE_NAME NVARCHAR(256) Remote source name

ADAPTER_NAME NVARCHAR(256) Adapter name

CONNECTION_INFO NVARCHAR(256) Connection information

AGENT_GROUP_NAME NVARCHAR(256) Name of the agent group

Related Information

REMOTE_SOURCES System View (SAP HANA SQL and System Views Reference)

7.2.17 REMOTE_SUBSCRIPTION_EXCEPTIONS System View [Smart Data Integration]

Provides details about an exception that occurred during the execution of a remote subscription. The
exceptions can be processed using the PROCESS REMOTE SUBSCRIPTION EXCEPTION SQL statement.

Structure

Column Data type Description

EXCEPTION_OID BIGINT Exception ID

OBJECT_TYPE VARCHAR(19) 'REMOTE SOURCE', 'REMOTE SUBSCRIPTION'

OBJECT_SCHEMA_NAME NVARCHAR(256) Schema name of remote source or remote subscription based on OBJECT_TYPE

OBJECT_NAME NVARCHAR(256) Object name of remote source or remote subscription based on OBJECT_TYPE

EXCEPTION_TIME TIMESTAMP Time at which the exception was raised

ERROR_NUMBER INTEGER Error number

ERROR_MESSAGE NVARCHAR(2000) Error message

COMPONENT VARCHAR(8) Component that raised the exception
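
For example, you might list open exceptions and then retry one of them (a minimal sketch; the exception ID 101 is illustrative, and the statement also accepts IGNORE; see the PROCESS REMOTE SUBSCRIPTION EXCEPTION statement reference for the full syntax):

  -- List recent exceptions, newest first
  SELECT EXCEPTION_OID, OBJECT_TYPE, OBJECT_NAME,
         EXCEPTION_TIME, ERROR_NUMBER, ERROR_MESSAGE
    FROM REMOTE_SUBSCRIPTION_EXCEPTIONS
   ORDER BY EXCEPTION_TIME DESC;

  -- Retry the failed operation for one exception (101 is an illustrative EXCEPTION_OID)
  PROCESS REMOTE SUBSCRIPTION EXCEPTION 101 RETRY;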

7.2.18 REMOTE_SUBSCRIPTIONS System View [Smart Data Integration]

Lists all the remote subscriptions created for a remote source.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Remote subscription schema name

SUBSCRIPTION_NAME NVARCHAR(256) Remote subscription name

OWNER_NAME NVARCHAR(256) Owner name

REMOTE_SOURCE_NAME NVARCHAR(256) Remote source name

IS_VALID VARCHAR(5) Specifies whether the remote subscription is valid or not. This becomes FALSE when its source or target objects are changed or dropped.

SUBSCRIPTION_TYPE VARCHAR(13) Remote subscription type

VIRTUAL_TABLE_SCHEMA_NAME NVARCHAR(256) Virtual table schema name

VIRTUAL_TABLE_NAME NVARCHAR(256) Virtual table name

SUBSCRIPTION_QUERY_STRING NCLOB Select statement specified in the subscription when subscription type is SQL

TARGET_OBJECT_TYPE VARCHAR(9) Remote subscription target object type: 'TABLE', 'PROCEDURE', 'TASK'

TARGET_OBJECT_SCHEMA_NAME NVARCHAR(256) Target object schema name

TARGET_OBJECT_NAME NVARCHAR(256) Target object name

TARGET_OTHER_PARAM_STRING NVARCHAR(4000) Constant parameter string to pass at execution when target object type is PROCEDURE or TASK

TASK_PROCEDURE_PARAMETERS NVARCHAR(5000) A comma-separated list of task parameters.

7.2.19 TASK_CLIENT_MAPPING System View [Smart Data Integration]

Provides the client mapping when a task is created by the ABAP API.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

CLIENT NVARCHAR(128) Name of the client that created the task with the ABAP API

7.2.20 TASK_COLUMN_DEFINITIONS System View [Smart Data Integration]

Defines the columns present in a particular table.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

COLUMN_NAME NVARCHAR(128) Name of the column used in the task plan within a table

MAPPED_NAME NVARCHAR(128) Mapped name of the column used in a task plan within a table

7.2.21 TASK_EXECUTIONS System View [Smart Data Integration]

Task-level run-time statistics generated when START TASK is run.

TASK_EXECUTIONS shows one record per task plan execution.

Data in this view is updated while the task is in progress. For example, STATUS, PROCESSED_RECORDS, and
TOTAL_PROGRESS_PERCENT are continuously updated until the task is complete.

Users may view information only for tasks that they ran themselves or were granted permissions to view.

Structure

Column Data type Description

HOST VARCHAR(64) Host name

PORT INTEGER Internal port

SCHEMA_NAME NVARCHAR(256) Schema name used in the task

TASK_NAME NVARCHAR(256) Name of the task

CONNECTION_ID INTEGER Connection identifier

TRANSACTION_ID INTEGER Transaction identifier used for the task execution

TASK_EXECUTION_ID BIGINT Task execution unique identifier

PARENT_TASK_EXECUTION_ID BIGINT Parent task identifier

IS_ASYNC VARCHAR(5) TRUE if the task is asynchronous, else FALSE

PARAMETERS NVARCHAR(5000) Input parameters for the task

PROCEDURE_PARAMETERS NVARCHAR(5000) Displays the input <param-list> values that were specified in the START TASK SQL command

START_TIME TIMESTAMP Start time of the task

END_TIME TIMESTAMP End time of the task

DURATION BIGINT Execution time of the task (microseconds)

STATUS VARCHAR(16) Status of the task: STARTING, RUNNING, FAILED, COMPLETED, CANCELLING, or CANCELLED

CURRENT_OPERATION NVARCHAR(128) Current operation of the task

PROCESSED_RECORDS BIGINT Total number of records processed

TOTAL_PROGRESS_PERCENT BIGINT Total task progress (percent)

USER_NAME NVARCHAR(256) User name

APPLICATION_USER_NAME NVARCHAR(256) Application user name

HAS_SIDE_EFFECTS VARCHAR(5) 'TRUE' if the task produces side-effect data, else 'FALSE'
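
For example, to watch tasks that are currently in flight (a minimal sketch):

  -- Progress of tasks that are still starting or running
  SELECT SCHEMA_NAME, TASK_NAME, TASK_EXECUTION_ID, STATUS,
         CURRENT_OPERATION, PROCESSED_RECORDS, TOTAL_PROGRESS_PERCENT
    FROM TASK_EXECUTIONS
   WHERE STATUS IN ('STARTING', 'RUNNING')
   ORDER BY START_TIME;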

7.2.22 TASK_LOCALIZATION System View [Smart Data Integration]

Contains localized values for the task framework tables.

Structure

Column Data type Description

LOC_TYPE_ID INTEGER Identifier of the type of the entity being localized

LOC_ID NVARCHAR(64) Identifier of the entity being localized

LANGUAGE NVARCHAR(1) One-character code of the localized language

DESCRIPTION NVARCHAR(1024) Localized description

7.2.23 TASK_OPERATIONS System View [Smart Data Integration]

Contains all operations that exist for a given task, as well as details about those operations.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

COMMENTS NVARCHAR(512) Comments made on the operation

HAS_SIDE_EFFECTS TINYINT Specifies whether the operation has side-effect data

OPERATION_TYPE NVARCHAR(128) Type of operation in the task plan

7.2.24 TASK_OPERATIONS_EXECUTIONS System View [Smart Data Integration]

Operations-level task statistics generated when START TASK is run.

TASK_OPERATIONS_EXECUTIONS shows one record per operation.

Data in this view is updated while the task is in progress. For example, STATUS, PROCESSED_RECORDS, and
OPERATIONS_PROGRESS_PERCENT are continuously updated until the task is complete.

Users may view information only for tasks that they ran themselves or were granted permissions to view.

Structure

Column Data type Description

HOST VARCHAR(64) Host name

PORT INTEGER Internal port

TASK_EXECUTION_ID BIGINT Task identifier

CONNECTION_ID INTEGER Connection identifier

TRANSACTION_ID INTEGER Transaction identifier used for the task execution

CURRENT_OPERATION NVARCHAR Name of operation

OPERATION_TYPE NVARCHAR(128) Type of operation

OPERATION_NAME NVARCHAR(128) Internal name of operation

START_TIME TIMESTAMP Start time of the task

END_TIME TIMESTAMP End time of the task

DURATION BIGINT Execution time of the task (microseconds)

STATUS VARCHAR(16) Status of the task:

● STARTING
● RUNNING
● FAILED
● COMPLETED
● CANCELLING
● CANCELLED

PROCESSED_RECORDS BIGINT Total number of records processed

OPERATION_PROGRESS_PERCENT DOUBLE Operation progress (percent)

HAS_SIDE_EFFECTS VARCHAR(5) 'TRUE' if the task produces side-effect data, else 'FALSE'
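
For example, to drill down into a single run and see which operation consumes the time (a minimal sketch; 12345 is an illustrative TASK_EXECUTION_ID):

  -- Per-operation progress for one task execution
  SELECT OPERATION_NAME, OPERATION_TYPE, STATUS,
         PROCESSED_RECORDS, OPERATION_PROGRESS_PERCENT, DURATION
    FROM TASK_OPERATIONS_EXECUTIONS
   WHERE TASK_EXECUTION_ID = 12345
   ORDER BY START_TIME;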

7.2.25 TASK_PARAMETERS System View [Smart Data Integration]

Contains details about task parameters.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Schema in which the task was created

TASK_NAME NVARCHAR(256) Name of task

PARAMETER_NAME NVARCHAR(256) Name of parameter

POSITION INTEGER Position of parameter

TABLE_TYPE_SCHEMA NVARCHAR(256) Schema in which the TableType was created

TABLE_TYPE_NAME NVARCHAR(256) Name of TableType

PARAMETER_TYPE VARCHAR(7) Parameter type: IN or OUT

7.2.26 TASK_TABLE_DEFINITIONS System View [Smart Data Integration]

Contains all of the tables used by the various side-effect-producing operations.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_ID INTEGER Unique identifier for the table

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation

SIDE_EFFECT_SCHEMA NVARCHAR(128) Schema where the generated side-effect table is located

SIDE_EFFECT_NAME NVARCHAR(128) Name of the generated side-effect table

IS_PRIMARY_TABLE TINYINT Specifies whether this table is the primary table in a relationship

OPERATION_TABLE_TYPE NVARCHAR(20) Type of operation that the table is used within

7.2.27 TASK_TABLE_RELATIONSHIPS System View [Smart Data Integration]

Defines the relationships, if any, between tables within an operation.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation

RELATED_TABLE_NAME NVARCHAR(128) Name of the table to which the table specified in TABLE_NAME is related

FROM_ATTRIBUTE NVARCHAR(128) Name of the column in the TABLE_NAME table that relates to the TO_ATTRIBUTE

TO_ATTRIBUTE NVARCHAR(128) Name of the column in the RELATED_TABLE_NAME table that relates to the FROM_ATTRIBUTE

7.2.28 TASKS System View [Smart Data Integration]

Details about tasks.

Structure

Column Data type Description

TASK_OID BIGINT Unique identifier for a task

TASK_NAME NVARCHAR(256) Name of task

SCHEMA_NAME NVARCHAR(256) Schema the task was created in

OWNER_NAME NVARCHAR(256) Owner of the task

CREATE_TIME TIMESTAMP Creation time

MEMORY_SIZE BIGINT Memory size of loaded task

TASK_TYPE NVARCHAR(64) Type of task ('PLAN' or 'PROCEDURE'), based on how the task was created

PLAN_VERSION NVARCHAR(32) Version of the task plan

PLAN NCLOB Task plan used to define the task, or task plan generated to call the procedure

COMMENTS NVARCHAR(256) Description of the task, from the task plan

HAS_TABLE_TYPE_INPUT VARCHAR(5) 'TRUE' if the task is modeled with a table type as input, meaning data would need to be passed at execution time

HAS_SDQ VARCHAR(5) 'TRUE' if the task contains SDQ (smart data quality) functionality

IS_REALTIME_TASK VARCHAR(5) 'TRUE' if the task is a realtime task, else 'FALSE'

IS_VALID VARCHAR(5) 'TRUE' if the task is in a valid state; 'FALSE' if it has been invalidated by a dependency

IS_READ_ONLY VARCHAR(5) 'TRUE' if the task is read only (has only table type outputs), 'FALSE' if it writes to non-table-type outputs

PROCEDURE_SCHEMA NVARCHAR(256) If the task was created with a procedure instead of a plan, this attribute contains the schema name of the stored procedure

PROCEDURE_NAME NVARCHAR(256) If the task was created with a procedure instead of a plan, this attribute contains the name of the stored procedure

INPUT_PARAMETER_COUNT SMALLINT Number of input (tableType) parameters

OUTPUT_PARAMETER_COUNT SMALLINT Number of output (tableType) parameters

SQL_SECURITY VARCHAR(7) Security model for the task, either 'DEFINER' or 'INVOKER'
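
For example, to find tasks that have been invalidated by a changed dependency (a minimal sketch):

  -- List invalid tasks together with their type and owner
  SELECT SCHEMA_NAME, TASK_NAME, TASK_TYPE, OWNER_NAME, CREATE_TIME
    FROM TASKS
   WHERE IS_VALID = 'FALSE';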

7.2.29 VIRTUAL_COLUMN_PROPERTIES System View [Smart Data Integration]

Lists the properties of the columns in a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Schema name of virtual table

TABLE_NAME NVARCHAR(256) Virtual table name

COLUMN_NAME NVARCHAR(256) Virtual table column name

PROPERTY NVARCHAR(256) Property name

VALUE NVARCHAR(512) Property value

7.2.30 VIRTUAL_TABLE_PROPERTIES System View [Smart Data Integration]

Lists the properties of a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Schema name of virtual table

TABLE_NAME NVARCHAR(256) Virtual table name

PROPERTY NVARCHAR(256) Property name

VALUE NCLOB Property value. For example: a large XSD of size 1M

7.2.31 BEST_RECORD_GROUP_MASTER_STATISTICS System View [Smart Data Quality]

Contains a summary of Best Record group master statistics.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

NUM_RECORDS BIGINT Total number of records processed

NUM_GROUP_MASTERS BIGINT Number of group master records processed

NUM_DUPLICATES BIGINT Number of duplicate records processed

NUM_SURVIVORS BIGINT Number of surviving records processed

NUM_NON_MATCH_RECORDS BIGINT Number of non-matching records processed

7.2.32 BEST_RECORD_RESULTS System View [Smart Data Quality]

Contains governance information for every column in every record that is updated in the best record process.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

DST_TABLE_NAME NVARCHAR(128) Name of the destination table for the operation

DST_ROW_ID BIGINT Unique identifier for the destination row

DST_COLUMN_NAME NVARCHAR(128) Name of the destination column in the destination table

DST_ROW_TYPE NVARCHAR(1) Identifies how the record was updated or if it was newly created

SRC_TABLE_NAME NVARCHAR(128) Name of the source table for the operation

SRC_ROW_ID BIGINT Unique identifier for the source row

SRC_COLUMN_NAME NVARCHAR(128) Name of the source column in the source table

STRATEGY_GROUP_ID INTEGER Identification number that identifies the best record strategy group

STRATEGY_ID INTEGER Identification number that identifies each strategy listed in the strategy group

BEST_RECORD_RULE NVARCHAR(256) Name of the rule that updates one or more columns as it is defined in the best record configuration

ACTION_NAME NVARCHAR(256) Name of the action that updates a column as it is defined in the best record configuration

UPDATE_NUM INTEGER Number of times the column was updated in the best record process

OPERATION_TYPE NVARCHAR(1) Identifies how the record was updated in the best record process

7.2.33 BEST_RECORD_STRATEGIES System View [Smart Data Quality]

Contains information on which strategies are used in each strategy group and in which order.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

STRATEGY_GROUP_NAME NVARCHAR(256) Name of the strategy group as defined in the best record
configuration

STRATEGY_ID INTEGER Identification number that identifies each strategy listed in the strategy group

STRATEGY_ORDER INTEGER Order of the strategy as defined in the list of strategies

STRATEGY_NAME NVARCHAR(256) Name of the strategy as defined in the best record configu­
ration

7.2.34 CLEANSE_ADDRESS_RECORD_INFO System View [Smart Data Quality]

Describes how well an address was assigned as well as the type of address.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

ENTITY_INSTANCE INTEGER Identifier to differentiate between multiple entities processed in a row

ENTITY_INSTANCE_OCCURRENCE INTEGER Unique identifier to identify the occurrence of an entity

DATA_SOURCE NVARCHAR(256) Source where the data was produced

ISO_COUNTRY_2CHAR NVARCHAR(4) Two-character country code

ASSIGNMENT_TYPE NVARCHAR(4) Code that represents the type of an address

ASSIGNMENT_INFORMATION NVARCHAR(4) Code that specifies the validity of an address

ASSIGNMENT_LEVEL NVARCHAR(4) Code that represents the level to which the address matched
data in the address reference data

7.2.35 CLEANSE_CHANGE_INFO System View [Smart Data Quality]

Describes the changes made during the cleansing process.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed

ENTITY_INSTANCE INTEGER Identifier to differentiate between multiple entities processed in a row

ENTITY_INSTANCE_OCCURRENCE INTEGER Unique identifier to identify the occurrence of an entity

COMPONENT_ID NVARCHAR(12) Identification number that refers to data components

COMPONENT_ELEMENT_ID NVARCHAR(12) Identification number that refers to more granular elements within a component

DATA_SOURCE NVARCHAR(256) Source where the data was produced

CHANGE_SIGNIFICANCE_ID NVARCHAR(12) Identification number that refers to the significance of the change

7.2.36 CLEANSE_COMPONENT_INFO System View [Smart Data Quality]

Identifies the location of parsed data elements in the input and output.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

ENTITY_ID NVARCHAR(12) Identifier describing a data attribute such as a person name, organization name, address, and so on

ENTITY_INSTANCE INTEGER Identifier to differentiate between multiple entities processed in a row

ENTITY_INSTANCE_OCCURRENCE INTEGER Unique identifier to identify the occurrence of an entity

DATA_SOURCE NVARCHAR(256) Source where the data originated

COMPONENT_ID NVARCHAR(12) Identification number that refers to data components

COMPONENT_ELEMENT_ID NVARCHAR(12) Identification number that refers to more granular elements within a component

TABLE_NAME NVARCHAR(128) Name of the input table where the component element was
found

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

COLUMN_NAME NVARCHAR(128) Name of the column in the input table where the component
element was found

COLUMN_START_POSITION INTEGER Starting character of the component element in the input column

COLUMN_DATA_LENGTH INTEGER Number of characters of the component element in the input column

OUTPUT_TABLE_NAME NVARCHAR(128) Name of the output table where the component element was
written

OUTPUT_COLUMN_NAME NVARCHAR(128) Name of the column in the output table where the compo­
nent element was written

OUTPUT_COLUMN_START_POSITION INTEGER Starting character of the component element in the output column

OUTPUT_COLUMN_DATA_LENGTH INTEGER Number of characters of the component element in the output column

7.2.37 CLEANSE_INFO_CODES System View [Smart Data Quality]

Contains one row per info code generated by the cleansing process.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed

ENTITY_INSTANCE INTEGER Identifier to differentiate between multiple entities processed in a row

ENTITY_INSTANCE_OCCURRENCE INTEGER Unique identifier to identify the occurrence of an entity

DATA_SOURCE NVARCHAR(256) Source where the data was produced

INFO_CODE NVARCHAR(10) Information code that gives information about the processing of the record

7.2.38 CLEANSE_STATISTICS System View [Smart Data Quality]

Contains a summary of Cleanse statistics.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed

ENTITY_INSTANCE INTEGER Identifier to differentiate between multiple entities processed in a row

NUM_RECORDS BIGINT Total number of records processed for the entity instance

NUM_VALIDS BIGINT Number of valid records processed for the entity instance

NUM_SUSPECTS BIGINT Number of suspect records processed for the entity instance

NUM_BLANKS BIGINT Number of blank records processed for the entity instance

NUM_HIGH_SIGNIFICANT_CHANGES BIGINT Number of records with high significance changes for the entity instance
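
For example, to compare valid, suspect, and blank counts per entity across the runs of one task (a minimal sketch; the task name is illustrative):

  -- Cleanse result quality per entity instance for a given task
  SELECT TASK_EXECUTION_ID, ENTITY_ID, ENTITY_INSTANCE,
         NUM_RECORDS, NUM_VALIDS, NUM_SUSPECTS, NUM_BLANKS
    FROM CLEANSE_STATISTICS
   WHERE TASK_NAME = 'MY_CLEANSE_TASK'
   ORDER BY TASK_EXECUTION_ID;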

7.2.39 GEOCODE_INFO_CODES System View [Smart Data Quality]

Contains one row per info code generated by the geocode transformation process.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

DATA_SOURCE NVARCHAR(256) Source where the data was produced

INFO_CODE NVARCHAR(10) Information code generated by the geocode transformation operation

7.2.40 GEOCODE_STATISTICS System View [Smart Data Quality]

Contains a summary of Geocode statistics.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

NUM_RECORDS BIGINT Total number of records processed

NUM_ASSIGNED BIGINT Number of assigned records processed

NUM_UNASSIGNED BIGINT Number of unassigned records processed

7.2.41 MATCH_GROUP_INFO System View [Smart Data Quality]

Contains one row for each match group.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

GROUP_ID INTEGER Group identification number

GROUP_COUNT INTEGER Number of records in the match group

SOURCE_COUNT INTEGER Number of sources represented in the match group

REVIEW_GROUP NVARCHAR(1) Indicates whether the group is flagged for review

CONFLICT_GROUP NVARCHAR(1) Indicates whether the group is flagged for conflict

7.2.42 MATCH_RECORD_INFO System View [Smart Data Quality]

Contains one row for each matching record per level.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

GROUP_ID INTEGER Group identification number

7.2.43 MATCH_SOURCE_STATISTICS System View [Smart Data Quality]

Contains counts of matches within and between data sources.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

SOURCE_NAME NVARCHAR(256) Data source name

RELATED_SOURCE_NAME NVARCHAR(256) Related data source name

NUM_MATCH_DECISIONS INTEGER Number of comparisons resulting in a match decision between records in each SOURCE_ID/RELATED_SOURCE_ID pair

7.2.44 MATCH_STATISTICS System View [Smart Data Quality]

Contains statistics regarding the run of the transformation operation.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

NUM_RECORDS BIGINT Total number of records processed by the transformation operation

NUM_MATCH_RECORDS BIGINT Number of records that reside in match groups

NUM_NON_MATCH_RECORDS BIGINT Number of non-matching records that do not reside in match groups

NUM_MATCH_GROUPS BIGINT Number of match groups identified

NUM_REVIEW_GROUPS BIGINT Number of match groups flagged for review

NUM_NON_REVIEW_GROUPS BIGINT Number of match groups not flagged for review

NUM_CONFLICT_GROUPS BIGINT Number of match groups flagged with conflicts

NUM_COMPARISONS_PERFORMED BIGINT Number of comparisons performed by the transformation operation

NUM_MATCH_DECISIONS BIGINT Number of comparisons resulting in a match decision
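
For example, to get a per-run summary of match results for one task (a minimal sketch; the task name is illustrative):

  -- Match-group summary per task execution
  SELECT TASK_EXECUTION_ID, NUM_RECORDS, NUM_MATCH_RECORDS,
         NUM_MATCH_GROUPS, NUM_REVIEW_GROUPS, NUM_CONFLICT_GROUPS
    FROM MATCH_STATISTICS
   WHERE TASK_NAME = 'MY_MATCH_TASK'
   ORDER BY TASK_EXECUTION_ID;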

7.2.45 MATCH_TRACING System View [Smart Data Quality]

Contains one row for each match decision made during the matching process.

Structure

Column Data type Description

SCHEMA_NAME NVARCHAR(256) Name of the schema where the task is located

TASK_NAME NVARCHAR(256) Name of the task

TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called

OPERATION_NAME NVARCHAR(128) Name of the operation in the task plan

TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation

ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

RELATED_TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation

RELATED_ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan

POLICY_NAME NVARCHAR(256) Name of the match policy that processed the related rows

RULE_NAME NVARCHAR(256) Name of the match rule that processed the related rows

SCORE INTEGER Similarity score of the related rows

Important Disclaimers and Legal Information

Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:

● Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:

● The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
● SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.

● Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering a SAP-hosted Web site. By using such
links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.

Beta and Other Experimental Features


Experimental features are not part of the officially delivered scope that SAP guarantees for future releases. This means that experimental features may be changed by
SAP at any time for any reason without notice. Experimental features are not for productive use. You may not demonstrate, test, examine, evaluate or otherwise use
the experimental features in a live operating environment or with data that has not been sufficiently backed up.
The purpose of experimental features is to get feedback early on, allowing customers and partners to influence the future product accordingly. By providing your
feedback (e.g. in the SAP Community), you accept that intellectual property rights of the contributions or derivative works shall remain the exclusive property of SAP.

Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.

Gender-Related Language
We try not to use gender­specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.

Videos Hosted on External Platforms


Some videos may point to third-party video hosting platforms. SAP cannot guarantee the future availability of videos stored on these platforms. Furthermore, any
advertisements or other content hosted on these platforms (for example, suggested videos or by navigating to other videos hosted on the same site), are not within
the control or responsibility of SAP.

www.sap.com/contactsap

© 2020 SAP SE or an SAP affiliate company. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate company. The information contained herein may be changed without prior notice.

Some software products marketed by SAP SE and its distributors contain proprietary software components of other software vendors. National product specifications may vary.

These materials are provided by SAP SE or an SAP affiliate company for informational purposes only, without representation or warranty of any kind, and SAP or its affiliated companies shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP or SAP affiliate company products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.

SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. All other product and service names mentioned are the trademarks of their respective companies.

Please see https://www.sap.com/about/legal/trademark.html for additional trademark information and notices.

THE BEST RUN
