SAP HANA Distributed System Scaleout and HA
1. SAP HANA Distributed System
Scale out and HA
This is a compilation of notes from the installation and configuration of an SAP HANA multiple-host system, including automatic failover testing and monitoring.
By OZSoft Consulting for ITConductor.com
Author: Terry Kempis
Editor: Linh Nguyen
2. Introduction
Overview of system architecture
• HANA scale-out requires the data persistence layer to reside on shared storage (such as NFS or NAS), on a SAN with a clustered file system, or on non-shared storage using the HANA storage connector API
In our scenario, OZHANANFS provides the prerequisite shared file system (NFS), mounted as sketched below, containing:
• Installation path (sapmnt): /hana/shared
• Data volume: /hana/data/<SID>
• Log volume: /hana/log/<SID>
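Purely as an illustration of this layout, the shared file system might be mounted on each host along the following lines; the export path on OZHANANFS and the mount options are assumptions, not taken from the scenario.
# Example /etc/fstab entry on every HANA host (export path "/export/hana" is an assumption):
# ozhananfs:/export/hana   /hana   nfs   defaults,_netdev   0 0
# Quick check that all three paths are visible (SID HDB, as used later in this deck):
df -h /hana/shared /hana/data/HDB /hana/log/HDB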
3. References
• SAP HANA Administration Guide
• SAP HANA Server Installation and Update Guide (SAP Help Portal)
4. Installation - overview
This document covers two installation steps:
1. Install the server database on the master host (ozhanaitc) and, at the same time, add host ozhdbnode2
2. Add another host, ozhdbnode3
Some of the prerequisites (quick checks are sketched below):
- All hosts in a multi-host system must have the same sapsys group ID
- Although not strictly required, for convenience all hosts should have the same root password (it can also be overridden during installation); installation is handled by hdblcm with root privileges
- The shared mount /hana must be accessible to all hosts
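As a quick sanity check of these prerequisites before starting hdblcm, something along the following lines can be run; the host names come from this scenario, everything else is an illustrative sketch.
# Run from the master host: the sapsys GID must be identical everywhere, and /hana must be mounted on every host
for h in ozhanaitc ozhdbnode2 ozhdbnode3; do
  ssh root@$h 'hostname; getent group sapsys; df -h /hana | tail -1'
done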
5. Master host + additional host Installation (1)
The installation of an SAP HANA multiple-host system uses the same hdblcm tool used for a single-host system, with additional prompts (a batch-style sketch follows below):
• To add another host and its role
• The certificate to use on the additional host for internal communication (between hosts as well as between processes on a single host)
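The scenario here was installed interactively through the hdblcm prompts; purely as a sketch, a batch-style call covering the same choices might look like the lines below. SID HDB and instance 00 are taken from the paths used later in this document, the standby role for ozhdbnode2 follows the role shown on the next slides, and passwords would still be supplied interactively or via a configuration file.
# From the unpacked server installation medium, as root:
./hdblcm --sid=HDB --number=00 --sapmnt=/hana/shared \
  --addhosts=ozhdbnode2:role=standby \
  --certificates_hostmap=ozhdbnode2=ozhanaitc \
  --root_user=root --remote_execution=saphostagent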
6. Master host + additional host Installation (2)
After the installation, the two hosts and all their services are visible in HANA Studio. The 'IGNORE' value in the Host Status column represents the STANDBY host.
7. Adding a host
Reference: Add Hosts Using the Command-Line Interface
Command line to add worker host ozhdbnode3 to the master host ozhanaitc (the resulting host list is then displayed in HANA Studio):
cd /hana/shared/HDB/HDB00/hdblcm
./hdblcm --addhosts=ozhdbnode3:role=worker \
  --certificates_hostmap=ozhdbnode3=ozhanaitc \
  --root_password=XXXXXXX \
  --remote_execution=saphostagent \
  --use_http=yes
8. Volume mount points after installation
After the installation, only the volumes for the SYSTEMDB are created, e.g.
/hana/data/HDB/mnt00001/hdb00001
/hana/log/HDB/mnt00001/hdb00001
The digits at the end of mnt00001 refer to the host sequence:
mnt00001 is for the master host (in this example, ozhanaitc)
mnt00002 is for the next host that will hold data
The corresponding volumes are created only after creating a tenant database, which implicitly contains an indexserver. For example:
create database A4H at location 'ozhanaitc' system user password xxxxx
This database is still located in /mnt00001 (since it is on the same host as ozhanaitc).
create database ND3 at location 'ozhdbnode3' system user password yyy
This creates /mnt00002.
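To confirm on which host and mount each tenant's volumes were created, a query along these lines can be run in the SYSTEMDB; the exact column set of M_DATA_VOLUMES may vary by revision, so treat this as a sketch.
-- Data volume file per database and host (run in SYSTEMDB)
select database_name, host, port, volume_id, file_name
from "SYS_DATABASES"."M_DATA_VOLUMES"
order by database_name, host;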
9. Redistribution of tables in Distributed System
A feature available with a HANA distributed system is 'table distribution':
• In a distributed system, tables and table partitions are assigned to an index server on a particular host at the time of their creation, but this assignment can be changed; the process is known as "redistribution operations"
• Reference: SAP Note 2081591 - FAQ: SAP HANA Table Distribution
10. Redistribution (2)
Use redistribution for the following needs:
• Redistribute data before removing a host from a system
• Redistribute data after adding a new host to the system; this can be done by adding an index server to a tenant DB on another host
• Optimize the current table distribution
• Optimize table partitioning, commonly for SAP BW usage scenarios
For example, with the following tenants (a verification query is sketched after the diagram below):
• Tenant DB A4H on ozhanaitc
• Tenant DB ND3 on ozhdbnode3
• alter database A4H add 'indexserver' at location 'ozhdbnode3:30049'
• alter database ND3 add 'indexserver' at location 'ozhanaitc:30061'
(Diagram: tenant DB A4H on OZHANAITC and tenant DB ND3 on OZHDBNODE3)
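One way to verify that the two ALTER DATABASE statements above took effect is to list the indexservers per tenant from the SYSTEMDB; a sketch using the standard M_SERVICES view is shown here.
-- Each tenant should now show an indexserver on both hosts
select database_name, host, port, service_name, active_status
from "SYS_DATABASES"."M_SERVICES"
where service_name = 'indexserver'
order by database_name, host;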
11. Redistribution (3)
Tables/views related to table distribution:
• REORG_PLAN: contains the last table redistribution plan generated with this database connection. The contents are stored temporarily for the session and are deleted when the connection is closed.
• REORG_STEPS: contains the executed (or to-be-executed) table redistribution plan items.
To easily display any planned 'movement':
select * from reorg_steps
where new_host is not null;
To display tables whose run-time host is different from the original host location of the tenant DB:
select * from m_cs_tables
where schema_name = '<schema_name>'
and host <> '<original_host>';
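For a per-host summary of where a schema's column tables currently sit (useful before and after a redistribution), a grouping query such as the following can be run in the tenant DB; '<schema_name>' is a placeholder as in the queries above.
-- Table count and in-memory size per host for one schema
select host, count(*) as table_count,
       round(sum(memory_size_in_total)/1024/1024) as memory_mb
from m_cs_tables
where schema_name = '<schema_name>'
group by host
order by host;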
12. HA Failover – Testing (1)
Testing automatic failover to the standby host when a worker host goes down (the roles can be watched with the query sketched below):
1. Normal/original configured roles
2. STOP worker ozhdbnode3
• Standby ozhdbnode2 will have HOST_STATUS=PARTIAL
• Worker ozhdbnode3 will have HOST_STATUS=NO and FAILOVER_STATUS=FAILOVER to OZHDBNODE2
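During the test, the role and failover columns referenced above can be watched directly from the SYSTEMDB with a query like this sketch.
-- Host roles and failover state while the worker is down
select host, host_active, host_status, failover_status,
       nameserver_config_role, nameserver_actual_role,
       indexserver_config_role, indexserver_actual_role
from m_landscape_host_configuration;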
13. HA Failover – Testing (2)
3. After the takeover is complete:
• ozhdbnode2 is now a WORKER host
• ozhdbnode3 is now the STANDBY host
• The host roles are now switched (i.e. host_config_roles <> host_actual_roles)
14. HA Failover – Testing (3)
4. Restarting ozhdbnode3 (originally configured as SLAVE)
• Both ozhdbnode2 and ozhdbnode3 will have HOST_STATUS=INFO
• By design, HANA will not automatically fail back to the originally configured worker host, even after it has been fixed and restarted; therefore, after the first failover its actual role will differ from its configured role
• There is no automatic fail-back; the ozhdbnode3 system will remain in STANDBY
• To restore actual roles = configured roles, a manual step must be performed: stop ozhdbnode2, or restart the entire HANA cluster (a minimal sketch follows below)
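A minimal sketch of that manual step, assuming SID HDB and the <sid>adm user hdbadm (instance 00 as used earlier in this document):
# On ozhdbnode2, as hdbadm: stop the local instance so ozhdbnode3 takes back its configured worker role
HDB stop
# After the takeover completes, bring ozhdbnode2 back; it rejoins as the standby host
HDB start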
15. HA Failover – Testing (4)
5. Now stop ozhdbnode2 so ozhdbnode3 can take over its original role (i.e. SLAVE worker)
a) While the ozhdbnode2 system is stopping, its status will be 'STOPPING/WARNING'
b) ozhdbnode3 will have the status 'starting/warning', indicating it is taking over
16. HA Failover – Testing (5)
6. When ozhdbnode3 has fully taken over, its host_actual_role reverts to its original role (SLAVE)
Finally, the host roles are the same as originally configured.
Note that while ozhdbnode2 is IGNORED, the OVERVIEW status in HANA Studio will be red, since the services of ozhdbnode2 are stopped.
17. Monitoring/Reporting (1)
SQL queries that can be run from the SYSTEMDB to monitor multiple-host systems.
Display any inactive services, including tenant DB services:
select *
from "SYS_DATABASES"."M_SERVICES"
where active_status = 'NO';
List hosts that are not active, or whose configured role differs from the actual role (i.e. after a failover has occurred):
select *
from m_landscape_host_configuration
where
(
(host_active = 'NO')
OR
(host_config_roles <> host_actual_roles)
);
18. Monitoring/Reporting (2)
Other views worth reporting from:
M_LANDSCAPE_HOST_CONFIGURATION
• Detailed information for each host (master, worker, standby, dynamic tiering): failover group, plus configured and actual roles of the nameserver, indexserver, and host
M_DATA_VOLUMES
• Volume/file name for each database
19. Enterprise Monitoring
• Many configuration changes, status changes, and alerts can occur in a distributed HANA environment, and they require continuous monitoring. Monitoring can be automated via the OZSoft HANA Management Pack (HANA MP) for Microsoft SCOM and/or IT-Conductor.
20. What’s next
Topics for future blogs:
- HANA DR with System Replication
- Dynamic Tiering: data provisioning installation, extended storage maintenance, and management