See the new enhancements in v9.4 that take away the pain of guessing the right wal_keep_segments value
See the new time-delayed replication capability in v9.4
A short intro to the logical replication capability introduced in v9.4
How to tune slow-running SQL in PostgreSQL (with screenshots):
1. View the explain plan and analyze the slow-running query
2. Some basic tips for tuning the query
This document provides instructions for setting up hot standby replication between a primary and a secondary PostgreSQL database. It describes configuring WAL archiving on the primary, taking a backup of the primary to initialize the secondary, creating a recovery.conf file on the secondary, and testing replication. It also explains how to trigger a failover, perform a switchover, and rebuild the old primary after a failover.
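The steps above typically come down to a handful of configuration fragments. A minimal sketch, assuming pre-v12 recovery.conf conventions; the hostnames, archive path, and replication user are illustrative assumptions, not values from the document:

```ini
# postgresql.conf on the primary
wal_level = hot_standby
max_wal_senders = 3
archive_mode = on
archive_command = 'cp %p /archive/%f'    ; assumed archive location

# recovery.conf on the standby (created after restoring the base backup)
standby_mode = 'on'
primary_conninfo = 'host=primary-host port=5432 user=replicator'  ; assumed host/user
restore_command = 'cp /archive/%f %p'
trigger_file = '/tmp/promote'            ; touching this file triggers failover
```

Promotion for failover then amounts to creating the trigger file (or running pg_ctl promote) on the standby.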
This document discusses PostgreSQL streaming replication and switchover/switchback capabilities. It covers limitations in earlier PostgreSQL versions, timelines, new features in version 9.3 that enable switchover/switchback without needing fresh backups, and practical considerations such as requiring a clean shutdown and the role of the recovery.conf file. A demo of these features is promised at the end.
This document discusses PostgreSQL parameter tuning, specifically the memory and optimizer parameters. It provides guidance on setting parameters like shared_buffers, work_mem, temp_buffers, maintenance_work_mem, random_page_cost, seq_page_cost, and effective_cache_size to optimize performance based on hardware characteristics like available RAM and disk speed. It also covers the planner parameters that can force the inclusion or exclusion of certain query optimization techniques.
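As a rough illustration of the guidance above, a starting-point postgresql.conf sketch for a dedicated server with 16 GB of RAM; the specific values are assumptions to be validated against your own workload, not recommendations taken from the slides:

```ini
# postgresql.conf - memory (assumed: 16 GB RAM, dedicated host)
shared_buffers = 4GB              ; commonly ~25% of RAM
work_mem = 16MB                   ; per sort/hash per backend - keep modest
temp_buffers = 16MB
maintenance_work_mem = 512MB      ; speeds up VACUUM and CREATE INDEX

# optimizer cost model
random_page_cost = 1.1            ; lower on SSD; the default 4.0 assumes spinning disk
seq_page_cost = 1.0
effective_cache_size = 12GB       ; estimate of OS cache plus shared_buffers
```

The planner's enable_* settings (enable_seqscan, enable_nestloop, and so on) are the usual way to exclude a query optimization technique while diagnosing a plan.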
Asynchronous cascading master to multiple replicas
Asynchronous multi-master
Can be used for:
Improved performance for geographically dispersed users
High availability
Load distribution (OLTP vs. reporting)
This technical presentation by Dave Thomas, Systems Engineer at EDB, provides an overview of:
1) BGWriter/Background Writer Process
2) WAL Writer Process
3) Stats Collector Process
4) Autovacuum Launcher Process
5) Syslogger Process/Logger Process
6) Archiver Process
7) WAL Sender/Receiver Processes
Ilya Kosmodemiansky - An ultimate guide to upgrading your PostgreSQL installa... (PostgreSQL-Consulting)
Even an experienced PostgreSQL DBA cannot always say that upgrading between major versions of Postgres is an easy task, especially if there are special requirements such as downtime limitations, or if something goes wrong. For less experienced DBAs, anything more complex than dump/restore can be frustrating.
In this talk I will describe why we need a special procedure to upgrade between major versions, how that can be achieved, and what sort of problems can occur. I will review all possible ways to upgrade your cluster, from the classical pg_upgrade to old-school Slony or modern methods like logical replication. For each approach I will give a brief explanation of how it works (limited by the scope of this talk, of course), examples of how to perform the upgrade, and some advice on potentially problematic steps. Besides, I will touch upon topics such as integration of upgrade tools and procedures with other software: connection brokers, operating system package managers, automation tools, etc. This talk would not be complete if I did not cover cases where something goes wrong and how to deal with them.
hbaseconasia2017: Large scale data near-line loading method and architecture (HBaseCon)
This document proposes a read-write split near-line data loading method and architecture to:
- Increase data loading performance by separating write operations from read operations. A WriteServer handles write requests and loads data to HDFS, from which it is read by RegionServers.
- Control resources used by write operations to ensure read operations are not starved of resources like CPU, network, disk I/O, and handlers.
- Provide an architecture corresponding to Kafka and HDFS for streaming data from Kafka to HDFS to be loaded into HBase in a delayed manner.
- Include optimizations like task balancing across WriteServer slaves, prioritized compaction of small files, and customizable storage engines.
- Report test results showing one Write
This document summarizes a presentation about PostgreSQL replication. It discusses different replication terms like master/slave and primary/secondary. It also covers replication mechanisms like statement-based and binary replication. The document outlines how to configure and administer replication through files like postgresql.conf and recovery.conf. It discusses managing replication including failover, failback, remastering and replication lag. It also covers synchronous replication and cascading replication setups.
HBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest (HBaseCon)
HBase is used to serve online-facing traffic at Pinterest, which means no downtime is allowed. However, we were on HBase 0.94. To upgrade to the latest version, we needed to figure out a way to upgrade live while keeping the Pinterest site up. Recently, we successfully upgraded a 0.94 HBase cluster to 1.2 with no downtime. We made changes to both AsyncHBase and the HBase server side. We will talk about what we did and how we did it. We will also talk about the findings in config and performance tuning we made to achieve low latency.
Building tungsten-clusters-with-postgre sql-hot-standby-and-streaming-replica... (Command Prompt, Inc.)
Alex Alexander & Linas Virbalas
Hot standby and streaming replication will move the needle forward for high availability and scaling for a wide number of applications. Tungsten already supports clustering using warm standby. In this talk we will describe how to build clusters using the new PostgreSQL features and give our report from the trenches.
This talk will cover how hot standby and streaming replication work from a user perspective, then dive into a description of how to use them, taking Tungsten as an example. We'll cover the following issues:
* Configuration of warm standby and streaming replication
* Provisioning new standby instances
* Strategies for balancing reads across primary and standby databases
* Managing failover
* Troubleshooting and gotchas
Please join us for an enlightening discussion of a set of PostgreSQL features that are interesting to a wide range of PostgreSQL users.
PostgreSQL Replication High Availability Methods (Mydbops)
These slides illustrate the need for replication in PostgreSQL, why you need a replication DB topology, terminologies, replication nodes, and more.
Why use PgBouncer? It’s a lightweight, easy-to-configure connection pooler, and it does one job well. As you’d expect from a talk on connection pooling, we’ll give a brief summary of connection pooling and why it increases efficiency. We’ll look at when not to use connection pooling, and we’ll demonstrate how to configure PgBouncer and how it works. But did you know you can also do this?
1. Scaling PgBouncer: PgBouncer is single-threaded, which means a single instance of PgBouncer isn’t going to do you much good on a multi-threaded and/or multi-CPU machine. We’ll show you how to add more PgBouncer instances so you can use more than one thread for easy scaling.
2. Read-write / read-only routing: Using different PgBouncer databases you can route read-write traffic to the primary database and read-only traffic to a number of standby databases.
3. Load balancing: When we use multiple PgBouncer instances, load balancing comes for free. Load balancing can be directed to different standbys, and weighted according to ratios of load.
4. Silent failover: You can perform silent failover during promotion of a new primary (assuming you have a VIP/DNS etc. that always points to the primary).
5. And even DoS prevention and protection from “badly behaved” applications! By using distinct port numbers you can provide database connections which deal with sudden bursts of incoming traffic in very different ways, which can help prevent the database from becoming swamped during high-activity periods.
You should leave the presentation wondering if there is anything PgBouncer can’t do.
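The read-write/read-only routing described in point 2 is driven entirely by the [databases] section of pgbouncer.ini. A minimal sketch; the hostnames and database names are assumptions for illustration:

```ini
; pgbouncer.ini - route by logical database name (hosts are assumed)
[databases]
app_rw = host=primary.example.com port=5432 dbname=app
app_ro = host=standby1.example.com port=5432 dbname=app

[pgbouncer]
listen_addr = *
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Applications connect to `app_rw` for writes and `app_ro` for reads; the logical names seen by clients need not match the real dbname behind them.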
The document discusses tuning MySQL server settings for performance. Some key points covered include:
- Settings are workload-specific and depend on factors like storage engine, OS, hardware. Tuning involves getting a few settings right rather than maximizing all settings.
- Monitoring tools like SHOW STATUS, SHOW INNODB STATUS, and OS tools can help evaluate performance and identify tuning opportunities.
- Memory allocation and settings like innodb_buffer_pool_size, key_buffer_size, query_cache_size are important to configure based on the workload and available memory.
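A minimal my.cnf sketch of the memory settings mentioned above; the values are illustrative assumptions for an InnoDB-heavy host with 16 GB of RAM, not recommendations from the slides:

```ini
# my.cnf - memory-related settings (assumed: 16 GB RAM, mostly InnoDB)
[mysqld]
innodb_buffer_pool_size = 10G   # the single most important InnoDB setting
key_buffer_size = 64M           # matters mainly for MyISAM indexes
query_cache_size = 0            # often best disabled under heavy concurrency
```

SHOW STATUS counters (for example, buffer pool hit ratios) are then the feedback loop for adjusting these values to the workload.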
This document discusses virtualization, monitoring, and replication of Perforce software. It provides recommendations for virtualizing Perforce, such as using vSphere 5 with certain network drivers. It also discusses monitoring Perforce processes and logs to detect performance issues. New replication features in upcoming Perforce releases are outlined, including commit/edge servers that allow work to be distributed across multiple VMs.
Apache Traffic Server is an open source HTTP proxy and caching server. It provides high performance content delivery through caching, request multiplexing, and connection pooling. The document discusses Traffic Server's history and features, including its multithreaded event-driven architecture, caching capabilities, clustering support, and extensive configuration options. It also addresses how Traffic Server can improve performance and ease operations through automatic restart, plugin extensions, and statistics collection.
OpenStack is rapidly gaining popularity with businesses as they realize the benefits of a private cloud architecture. This presentation was delivered by Dave Page, Chief Architect, Tools & Installers at EnterpriseDB & PostgreSQL Core Team member during PG Open 2014. He addressed some of the common components of OpenStack deployments, how they can affect Postgres servers, and how users might best utilize some of the features they offer when deploying Postgres, including:
• Different configurations for the Nova compute service
• Use of the Cinder block store
• Virtual networking options with Neutron
• WAL archiving with the Swift object store
HBase-2.0.0 has been a couple of years in the making. It is chock-a-block full of a long list of new features and fixes. In this session, the 2.0.0 release manager will perform the impossible, describing the release content inside the session time bounds.
hbaseconasia2017 hbasecon hbase https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
MySQL Server Backup, Restoration, and Disaster Recovery Planning (Lenz Grimmer)
Slides of Colin Charles and me talking at the MySQL Conference 2009: http://www.mysqlconf.com/mysql2009/public/schedule/detail/5664
This document provides an overview of key differences between SQL Server and PostgreSQL databases. It covers topics such as extensions, cost, case sensitivity, operating systems, processor configuration, write-ahead logging (WAL), checkpoints, disabling writes, page corruptions, MVCC, vacuum, database snapshots, system databases, tables, indexes, statistics, triggers, functions, security, backups, replication, imports/exports, maintenance, and monitoring. The document aims to help SQL Server DBAs understand how to administer and work with PostgreSQL databases.
Online MySQL Backups with Percona XtraBackup (Kenny Gryp)
Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server, MySQL® and MariaDB®.
Percona XtraBackup provides:
* Fast and reliable backups
* Uninterrupted transaction processing during backups
* Savings on disk space and network bandwidth with better compression
* Automatic backup verification
* Higher uptime due to faster restore time
This talk will discuss the various features of Percona XtraBackup, including:
* Full & Incremental Backups
* Compression, Streaming & Encryption of Backups
* Backing Up To The Cloud (Swift).
* Percona XtraDB Cluster / Galera Cluster.
* Percona Server Specific features
Linux internals for Database administrators at Linux Piter 2016 (PostgreSQL-Consulting)
Input-output performance problems have been on the daily agenda for DBAs for as long as databases have existed. The volume of data grows rapidly, and you need to get your data from the disk fast and, moreover, to the disk fast. For most databases there is a more or less easy-to-find checklist of recommended Linux settings to maximize IO throughput. In most cases that checklist is good enough. But it is always better to understand how it works, especially if you run into corner cases. This talk is about how IO in Linux works, how database pages travel from the disk level to the database's own shared memory and back, and what kinds of mechanisms exist to control this. We will discuss memory structures, swap and page-out daemons, filesystems, schedulers and IO methods. Some fundamental differences in IO approaches between PostgreSQL, Oracle and MySQL will also be covered.
Transform your DBMS to drive engagement innovation with Big Data (Ashnikbiz)
This document discusses how organizations can save money on database management systems (DBMS) by moving from expensive commercial DBMS to more affordable open-source options like PostgreSQL. It notes that PostgreSQL has matured and can now handle mission critical workloads. The document recommends partnering with EnterpriseDB to take advantage of their commercial support and features for PostgreSQL. It highlights how customers have seen cost savings of 35-80% by switching to PostgreSQL and been able to reallocate funds to new business initiatives.
FOSSASIA 2016 - 7 Tips to design web centric high-performance applications (Ashnikbiz)
Ashnik Database Solution Architect, Sameer Kumar, an Open Source evangelist shared some tips at FOSSASIA 2016 about how to design web-centric high-performance applications.
NGINX Plus PLATFORM For Flawless Application Delivery (Ashnikbiz)
Flawless Application Delivery using Nginx Plus
By leveraging these latest features:
• Support for HTTP/2 standard
• Thread pools and socket sharding and how it can help improve performance
• NTLM support and new TCP security enhancements
• Advanced NGINX Plus monitoring, management and visibility of health & load checks
Catch this exclusive Google Hangout live!
November 4th, 2015 | 2.00-2.30PM IST | 4.30-5.00PM SGT
About the speaker: Sandeep Khuperkar, Director and CTO at Ashnik will be heading this session. He is an author, enthusiast and community moderator at opensource.com. He is also member of Open Source Initiative, Linux Foundation and Open Source Consortium Of India.
FOSSASIA 2015 - 10 Features your developers are missing when stuck with Propr... (Ashnikbiz)
Ashnik Database Solution Architect, Sameer Kumar, an Open Source evangelist presented at FOSSASIA 2015 about the features of open source database like PostgreSQL which are missed by developers stuck on proprietary databases.
10 Features you would love as an Open Source developer!
- New JSON Datatype
- Vast set of datatypes supported
- Rich support for Foreign Data Wrappers
- User Defined Operators
- User Defined Extensions
- Filter Based Indexes or Partial Indexes
- Granular control of parameters at User, Database, Connection or Transaction Level
- Use of indexes to get statistics
- JDBC API for the COPY command
- Full Text Search
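Two of the listed features, the JSON datatype and filter-based (partial) indexes, can be sketched in a few lines of SQL; the table and column names here are invented for illustration:

```sql
-- Hypothetical schema: a JSONB column plus a partial index
CREATE TABLE events (
    id      serial PRIMARY KEY,
    status  text NOT NULL,
    payload jsonb            -- the JSON datatype mentioned above
);

-- Partial index: only rows still pending are indexed, keeping it small
CREATE INDEX idx_events_pending ON events (id) WHERE status = 'pending';

-- Extract a JSON field as text with the ->> operator
SELECT payload->>'user' FROM events WHERE status = 'pending';
```

The partial index is used only by queries whose WHERE clause implies the index predicate, which is what keeps it both small and effective.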
Countdown to PostgreSQL v9.5 - Foreign Tables can be part of Inheritance Tree (Ashnikbiz)
Distributed databases and horizontal scale-up are among the key demands today. PostgreSQL already had some vertical scaling features, and horizontal scale-up through adding disks and table partitioning/child tables. With the release of v9.5, PostgreSQL gets the basic foundation for a native sharding capability. From v9.5, Foreign Tables are able to participate in an inheritance tree as a child or parent table, i.e. one can have table partitions residing on different systems.
In our countdown-to-v9.5 series of hangouts, we will be covering some of the great features of PostgreSQL v9.5 and their real-life applicability. In the first hangout in this series we will be talking about:
- The feature of foreign partitions/child tables
- Syntax and usage
- EXPLAIN plan demo
- Use cases and benefits
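The feature in the bullet points above can be sketched with postgres_fdw: a local parent table with a foreign table as one of its children. The server name, host, and columns are assumptions for illustration:

```sql
-- Hypothetical v9.5 example: a foreign table in an inheritance tree
CREATE EXTENSION postgres_fdw;
CREATE SERVER shard2 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard2.example.com', dbname 'sales');
CREATE USER MAPPING FOR CURRENT_USER SERVER shard2;

CREATE TABLE orders (id int, region text);          -- parent
CREATE TABLE orders_local () INHERITS (orders);     -- local child
CREATE FOREIGN TABLE orders_remote ()
    INHERITS (orders) SERVER shard2;                -- remote child (new in v9.5)

-- The plan shows both the local and the remote child being scanned
EXPLAIN SELECT * FROM orders WHERE region = 'apac';
```

A query against the parent transparently fans out to both local and remote children, which is the basic building block for cross-system partitioning.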
Join us for more and send us your queries on [email protected]
Building Hybrid data cluster using PostgreSQL and MongoDB (Ashnikbiz)
This document describes building a hybrid data cluster with MongoDB and PostgreSQL. It discusses using PostgreSQL's Foreign Data Wrapper (FDW) to allow PostgreSQL to query and join data stored in MongoDB collections. The document provides steps to set up a sharded MongoDB cluster, install the MongoDB FDW extension in PostgreSQL, and create foreign tables in PostgreSQL that map to MongoDB collections, allowing complex SQL queries on MongoDB data. Live demonstrations are provided of inserting, updating, and querying data across the hybrid cluster.
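The FDW setup described above can be sketched as follows, assuming the mongo_fdw extension; the hostname, database, collection, and column names are illustrative assumptions:

```sql
-- Hypothetical mongo_fdw setup: map a MongoDB collection to a foreign table
CREATE EXTENSION mongo_fdw;
CREATE SERVER mongo_srv FOREIGN DATA WRAPPER mongo_fdw
    OPTIONS (address 'mongo.example.com', port '27017');
CREATE USER MAPPING FOR CURRENT_USER SERVER mongo_srv;

CREATE FOREIGN TABLE reviews (
    _id     name,       -- MongoDB ObjectId
    product text,
    rating  int
) SERVER mongo_srv OPTIONS (database 'shop', collection 'reviews');

-- Join MongoDB data with a local PostgreSQL table in plain SQL
SELECT p.title, avg(r.rating)
FROM products p JOIN reviews r ON r.product = p.sku
GROUP BY p.title;
```

Once the foreign table exists, the MongoDB collection behaves like any other table in joins and aggregates, which is the "single, seamless database" idea in practice.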
It has been just a few months since PostgreSQL 9.5 was released. We have got some of our customers excited about the great new features and performance enhancements in v9.5. But here we are, already taking a peek into the next version, and we find it awesome! One of the most awaited features, parallelism, makes it to Postgres. The infrastructure for parallelism has been added over the last few releases, but the first parallel operations in query execution will be seen only in v9.6.
Building Data Integration and Transformations using Pentaho (Ashnikbiz)
This presentation will showcase the Data Integration capabilities of Pentaho which helps in building data transformations, through two demonstrations:
- How to build your first transformation to extract, transform and blend the data from various data sources
- How to add additional steps and filters to your transformation
Architecture for building scalable and highly available Postgres Cluster (Ashnikbiz)
As PostgreSQL has made way into business critical applications, many customers who are using Oracle RAC for high availability and load balancing have asked for similar functionality for using PostgreSQL.
In this Hangout session we will discuss architectures and alternatives, based on real-life experience, for achieving high availability and load balancing when you deploy PostgreSQL. We will also present some of the key tools and how to deploy them for the effectiveness of this architecture.
PgDay Asia 2016 - Security Best Practices for your Postgres Deployment (Ashnikbiz)
Ashnik Database Solution Architect, Sameer Kumar, an Open Source database evangelist, talked about "Security Best Practices for your Postgres Deployment" at the pgDay Asia event held in Singapore in March 2016.
Key areas he presented were:
- Security Model
- Security Features in Postgres
- Securing the access
- Avoiding common attacks
- Access Control and Securing data
- Logging and Auditing
- Patching – OS and PostgreSQL
A powerful feature in Postgres called Foreign Data Wrappers lets end users integrate data from MongoDB, Hadoop and other solutions with their Postgres database and leverage it as single, seamless database using SQL.
Use of these features has skyrocketed since EDB released to the open source community new FDWs for MongoDB, Hadoop and MySQL that support both read and write capabilities. Now greatly enhanced, FDWs enable integrating data across disparate deployments to support new workloads, expanded development goals and harvesting greater value from data.
Learn more about Foreign Data Wrappers (FDWs) and Postgres with Sameer Kumar, Database Consultant from Ashnik.
Target Audience: This presentation is intended for IT professionals seeking to do more with Postgres in their everyday projects and build new applications.
PostgreSQL offers several options for data replication, including native replication and third-party tools. Native replication includes Warm Standby since version 8.3, enabling continuous recovery, and Hot Standby/Streaming Replication since version 9.0, allowing read-only queries on the slave nodes. Tools such as Slony-I and RubyRep provide asynchronous master-slave and master-master replication independently of the PostgreSQL version.
PostgreSQL is an open source object-relational database system that has been in development since 1982. It supports Linux, Windows, Mac OS X, and Solaris and can be installed using package managers or installers. PostgreSQL provides many features including procedural languages, functions, indexes, triggers, multi-version concurrency control, and point-in-time recovery. It also has various administration and development tools.
Out of the box replication in postgres 9.4 (Denish Patel)
This document provides an overview of setting up out of the box replication in PostgreSQL 9.4 without third party tools. It discusses write-ahead logs (WAL), replication slots, pg_basebackup, and pg_receivexlog. The document then demonstrates setting up replication on VMs with pg_basebackup to initialize a standby server, configuration of primary and standby servers, and monitoring of replication.
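The pieces mentioned above fit together in a few commands. A sketch for 9.4, where the host names, replication user, data directory, and slot name are assumptions:

```
# On the primary: create a physical replication slot (name is assumed)
psql -c "SELECT pg_create_physical_replication_slot('standby1_slot');"

# Initialize the standby from a base backup, streaming WAL during the copy
pg_basebackup -h primary-host -D /var/lib/pgsql/9.4/data -U replicator -X stream

# recovery.conf on the standby then references the slot:
#   standby_mode = 'on'
#   primary_conninfo = 'host=primary-host user=replicator'
#   primary_slot_name = 'standby1_slot'
```

The slot makes the primary retain WAL until the standby has consumed it, which is what removes the wal_keep_segments guesswork.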
This document discusses PostgreSQL replication. It provides an overview of replication, including its history and features. Replication allows data to be copied from a primary database to one or more standby databases. This allows for high availability, load balancing, and read scaling. The document describes asynchronous and synchronous replication modes.
Big Data Business Transformation - Big Picture and BlueprintsAshnikbiz
Kaustubh Patwardhan, Head of Strategy and Business Development at Ashnik presents the big picture and blueprints of a big data journey for enterprises. The Value of Big Data – Machine Learning and its big impact. He covers a spectrum of Big Data use cases where right data storage, integration & data consolidation plays a big role.
This document provides an agenda and background information for a presentation on PostgreSQL. The agenda includes topics such as practical use of PostgreSQL, features, replication, and how to get started. The background section discusses the history and development of PostgreSQL, including its origins from INGRES and POSTGRES projects. It also introduces the PostgreSQL Global Development Team.
This presentation is for those who are familiar with databases and SQL, but want to learn how to move processing from their applications into the database to improve consistency, administration, and performance. Topics covered include advanced SQL features like referential integrity constraints, ANSI joins, views, rules, and triggers. The presentation also explains how to create server-side functions, operators, and custom data types in PostgreSQL.
This document discusses streaming replication in PostgreSQL. It covers how streaming replication works, including the write-ahead log and replication processes. It also discusses setting up replication between a primary and standby server, including configuring the servers and verifying replication is working properly. Monitoring replication is discussed along with views and functions for checking replication status. Maintenance tasks like adding or removing standbys and pausing replication are also mentioned.
The document discusses various stages in architecting an application for the cloud as it grows in scale and complexity.
Stage 1 involves a simple architecture suitable for startups with low overhead. Stage 2 adds redundancy as the business grows. Stage 3 requires the addition of load balancers and more servers as publicity increases load. Stage 4 requires database replication and partitioning as single databases can no longer handle the load. Later stages involve rearchitecting the application and databases to support further scaling through techniques like data partitioning, database clustering, and optimizing code and resources.
This document discusses stored procedures in SQL Server. It defines stored procedures as subroutines that are stored in a database's data dictionary and can be used to perform repetitive tasks. The document provides steps for creating a stored procedure in SQL Server Management Studio, including specifying a name, parameters, and body. It also lists some advantages of stored procedures like precompiled execution for improved performance when called repeatedly and more secure control over user permissions.
Best Practices: Migrating a Postgres Production Database to the CloudEDB
Do you want to learn how you can move to the Cloud? This presentation will provide the solid ideas and approaches you need to plan and execute a successful migration of a production Postgres database to the Cloud.
LVOUG meetup #4 - Case Study 10g to 11gMaris Elsins
My presentation on a case study of 10g to 11g upgrade at LVOUG meetup #4 in 2012. Includes preserving execution plans by exporting them from 10g and importing as SQL Plan Baselines in 11gR2
How we deployed Piwik web analytics system to handle a huge amount of unpredicted traffic, adding some cloud and modern scalability techniques. files:https://github.com/lorieri/piwik-presentation
Magento scalability from the trenches (Meet Magento Sweden 2016)Divante
This document discusses strategies for scaling a Magento e-commerce platform. It recommends first using vertical scaling by optimizing code and enabling caching before adding additional application and database servers through horizontal scaling. Specific optimizations discussed include using Redis for caching, Varnish for page caching, separating the database to its own server, enabling flat catalog indexing, and implementing master-slave database replication. Proper monitoring tools like New Relic and load testing are also emphasized for identifying bottlenecks during the scaling process.
The Pensions Trust - VM Backup Experiencesglbsolutions
VMware Backup Experiences Darren Bull Business Support Manager, The Pensions Trust
The Pensions Trust previously used tape backups and manual server recovery for disaster recovery that took 48 hours. They virtualized servers with VMware which simplified backups and DR. They tried EMC's Mirrorview replication but it fell behind and failed. They implemented DataDomain deduplicated storage for backups and replication which achieved 40x storage savings and offsite replication within 24 hours. For backups they moved from BackupExec to Veeam which reduced backup times from 24 hours to minutes and allowed DR testing recovery in under 6 hours. In conclusion, newer backup software and deduplicated storage provided reliable, efficient backups and disaster recovery meeting their 24 hour
This document discusses various topics related to Oracle Data Guard configurations including:
- Choosing the appropriate protection mode based on bandwidth, latency, and data loss tolerance.
- Performance tuning techniques such as enabling SYNC parallelization in 11g and adjusting redo transport parameters.
- Best practices for role transitions like switchovers and failovers, including using flashback and real-time redo apply.
- Parameters for corruption detection and techniques for automatic block repair using standby databases.
This presentation demonstrates how to pinpoint performance bottlenecks in SAP BusinessObjects reports and dashboards. It explores tools that can collect and analyze data from the various components involved, such as the front-end applications, servers, databases, and network. Specific examples are provided on how to use traces, logs, and other tools to measure and break down the timing of different workflows involving Web Intelligence, Design Studio, and SAP HANA. The goal is to identify where time is spent and determine if improvements can be made to content design, system resources, configuration, or queries.
The document discusses moving OpenStack to structured state management. It outlines use cases from deployers, developers, and users around ensuring reliability, debugging state transitions, and optimizing resource scheduling. Currently, state transitions are ad-hoc and distributed, making them hard to follow, recover from, and extend. The document proposes prototyping an orchestration solution to consolidate state management and make transitions and recoveries clearly defined. Key benefits would include less scattered state and recovery logic, faster provisioning, and improved scheduling capabilities.
Multi Source Replication With MySQL 5.7 @ VerisureKenny Gryp
Verisure migrated their data warehouse from using Tungsten Replicator to native multi-source replication in MySQL 5.7 to simplify operations. They loaded data from production shards into the new data warehouse setup using XtraBackup backups and improved replication capacity with MySQL's parallel replication features. Some issues were encountered with replication lag reporting and crashes during the upgrade but most were resolved. Monitoring and management tools also required updates to support the new multi-source replication configuration.
Start Counting: How We Unlocked Platform Efficiency and Reliability While Sav...VMware Tanzu
The document describes how Manulife improved the efficiency and reliability of their Pivotal Cloud Foundry (PCF) platforms while saving over $730,000. Key changes included implementing a scheduler to stop non-critical apps on weekends, switching from internal to external blob storage, changing Diego cell VM types to more optimized models, and tuning various foundation configurations. These changes resulted in estimated annual savings of $40,000 from scheduling, $21,500 from external blob storage, and over $1 million from Diego cell and foundation changes, for a total of over $1 million in savings.
WATS 2014 WA Agents Overview - CA Workload Automation Technology Summit (WATS...Extra Technology
Please contact us via our contact page - http://www.extratechnology.com/contact - to learn more about CA Technologies' Workload Automation products and to book your place at the next WATS event.
This 'CA Workload Automation Agents Overview' presentation by CA's John Crespin was delivered at 'CA Workload Automation Technology Summit (WATS) 2014' in London, October 2014.
CA's sophisticated workload automation agent technology is shared by CA's AutoSys, CA 7, dSeries and ESP engines.
Since 2013 the UK User Group meetings for dSeries, AutoSys and CA7 are incorporated into WATS. Customers agree that WATS is a must-attend event for the CA Workload Automation community, showcasing CA's Workload Automation solutions.
WATS is a free-of-charge event, sponsored and arranged by Workload Automation experts Extra Technology. It features guest speakers from CA Technologies and CA Community Group Members.
#dSeries #AutoSys #ESP #CA7 #WATS #WorkloadAutomation @CAinc @CA_Community @extratechnology #database #webinterface #Java
- The document discusses managing a large OLTP database at PayPal, including capacity management, planned maintenance, performance management, and troubleshooting. It provides details on monitoring the database infrastructure, conducting maintenance such as patching and switchovers, and optimizing performance for Oracle RAC environments. The goal is to support business needs and provide uninterrupted service through proactive management of the database tier.
IBM Insight 2013 - Aetna's production experience using IBM DB2 Analytics Acce...Daniel Martin
Aetna uses IBM's DB2 Analytics Accelerator to improve the performance of long-running reports on its DB2 database. The accelerator offloads eligible queries to the Netezza appliance, reducing query times from hours to seconds. Aetna saw a 4x compression rate on its data and was able to load 1.5 billion rows in 15 minutes. Reports that previously timed out after 82 minutes now return results in 27 seconds, improving business users' ability to analyze data.
GWAVACon 2015: Microsoft MVP - Exchange Server Migrations & UpdatesGWAVA
This document discusses Exchange Server updates and migrations. It provides guidance on updating Exchange Servers, including why updates are important, the different types of updates, and the general update process. It also outlines the general server migration process, including preparing Active Directory, installing new Exchange Servers, configuring load balancing, testing the new environment, changing DNS records, and decommissioning legacy servers. Questions are taken at the end if time allows.
The following article is the best simplified steps that will help you install and configure LEMP stack. its written by one of the genius engineers or Rootgate.com
Oracle Flashback Query allows users to recover data to a previous point in time using the System Change Number (SCN) or timestamp. Setting up Flashback Query involves determining the undo retention period, creating an undo tablespace, and granting privileges to users. The DBMS_FLASHBACK package implements Flashback Query procedures like ENABLE_AT_TIME and DISABLE. DBMS_RESUMABLE allows long-running operations to suspend and resume if errors occur. The AFTER SUSPEND trigger notifies DBAs of suspended operations. Export/Import now supports Flashback Query parameters and resuming space allocation operations.
Nginx is a lightweight web server that was created in 2002 to address the C10K problem of scaling to 10,000 concurrent connections. It uses an asynchronous event-driven architecture that uses less memory and CPU than traditional multi-threaded models. Key features include acting as a reverse proxy, load balancer, HTTP cache, and web server. Nginx has grown in popularity due to its high performance, low memory usage, simple configuration, and rich feature set including modules for streaming, caching, and dynamic content.
CloudDBOps is Ashnik's automation focussed UI tool which can help you seamlessly install and configure multiple technologies like Postgres, MongoDB, Elastic(ELK), Monitoring
Database automation tools are needed to automate repetitive tasks, reduce risks from manual errors, improve alignment between business and IT, and allow organizations to move faster. They help keep systems running smoothly through monitoring, provisioning, backup/restore, maintenance, security, and more. When choosing a tool, organizations should consider ease of implementation, breadth of use cases covered, ability to work on-premises and in the cloud, long-term costs, customizability, learning curve, and do a trial run.
Autoscaling in Kubernetes allows applications to automatically scale resources up or down based on metrics like CPU usage. It addresses challenges with traditional autoscaling approaches by being platform independent and scaling pods quickly using the Horizontal Pod Autoscaler. The document outlines an architecture that sets autoscaling to increase application pods when CPU usage crosses 50%, with a minimum of 1 and maximum of 3 pods. It then demonstrates this through scenarios of idle and heavy loads, ramping up users over 10 seconds to test the autoscaling capabilities.
Why and how to use Kubernetes for scaling of your multi-tier (n-tier) appli...Ashnikbiz
Kubernetes can be used to scale multi-tier applications by providing tools for container orchestration, including service discovery, load balancing, storage orchestration, and self-healing capabilities. It addresses challenges with traditional monolithic architectures by allowing microservices that are isolated, declarative, and can autoscale horizontally and vertically through features like horizontal pod autoscaling, vertical pod autoscaling, and cluster autoscaling. This allows applications to dynamically add or remove pods and nodes as needed to meet changing workload demands.
Zero trust in a multi tenant environment Ashnikbiz
Vault provides secure multi-tenancy by allowing the creation of namespaces that isolate "Vaults within a Vault". Each namespace can have independent authentication methods, secret engines, policies, identities and access management. Vault also enables API-driven encryption through secret engines like Transit and unified identities across multiple environments through its identity system. These capabilities allow Vault to securely store, restrict access to, and manage encryption of secrets and keys for multi-tenant infrastructure.
Deploy and automate ‘Secrets Management’ for a multi-cloud environmentAshnikbiz
Over the years, there has been a massive transition from on-premise environments to hybrid or multi-cloud, resulting in a significant increase in the adoption of cloud-native practices and technologies. However, while cloud-native methodologies offer growing benefits and are instrumental to digitalization, they can pose considerable challenges in managing secrets.
Secrets management aims to solve a lack of visibility and control on handling these highly-trusted credentials.
Deploy, move and manage Postgres across cloud platformsAshnikbiz
Running applications in a hybrid set-up creates complexities that can increase downtime and maintenance. PostgreSQL runs across virtual, cloud, and container environments; minimizing complexity without sacrificing the performance, so you can take control. Being today’s undisputed leader of relational databases for new and modern applications, Postgres’ tools and features will enable you to swiftly deploy, move and manage your database across platforms.
Deploy, move and manage Postgres across cloud platformsAshnikbiz
Running applications in a hybrid setup creates complexities that can increase downtime and maintenance. PostgreSQL runs across virtual, cloud, and container environments; minimizing complexity without sacrificing the performance, so you can take control. Being today’s undisputed leader of relational databases for new and modern applications, Postgres’ tools and features will enable you to swiftly deploy, move and manage your database across platforms.
Webinar Covers:
Multi-cloud strategy and trends
How EDB Postgres can pillar your cloud platform
Use cases: Postgres and its tools on-premises and multi-cloud platforms
Demo: Using Postgres tools on-premises and for diverse cloud platforms – handling back-up, monitoring, and ensuring Business Continuity Process (BCP)
The Best Approach For Multi-cloud Infrastructure Provisioning-2Ashnikbiz
The webinar covers the best approach for multi-cloud infrastructure provisioning. It discusses the complexities of multi-cloud environments and how Terraform can help with adoption. The webinar features a demo of provisioning cloud infrastructure with Terraform. It also addresses the business drivers for multi-cloud, including improving customer experience, and the challenges of multi-cloud such as the need for multiple skills and complex deployments.
The Best Approach For Multi-cloud Infrastructure ProvisioningAshnikbiz
This document discusses challenges of cloud computing and how HashiCorp's products address them. It introduces Cloud 2.0 as needing a unified control plane across networking, security, operations, and development to manage applications across private and public clouds. HashiCorp's products like Terraform for infrastructure as code, Vault for secrets management, and Consul for service discovery provide a full stack to operate in modern, dynamic cloud environments.
Which PostgreSQL is right for your multi cloud strategy? P2Ashnikbiz
The adoption of PostgreSQL in enterprises is becoming a strategic choice, more so with the adoption of Multi-Cloud now becoming a need for enterprise deployment. This availability creates multiple combinations of deployment options for you. So, it is important to identify the right strategy fitting into your organization’s needs.
Which PostgreSQL is right for your multi cloud strategy? P1Ashnikbiz
This webinar discusses strategies for using PostgreSQL in a multi-cloud environment. It will cover the different PostgreSQL deployment options for multi-cloud, including PostgreSQL-as-a-service, containerized PostgreSQL, and PostgreSQL on infrastructure-as-a-service. The webinar will also demonstrate how to automate deploying and scaling PostgreSQL. Key considerations for choosing a PostgreSQL option for multi-cloud include manageability, transportability, and automation.
Reduce the complexities of managing Kubernetes clusters anywhere 2Ashnikbiz
Learn how Kubernetes has become a critical component for deploying applications on multi-platform / multi-cloud environments and how to manage and monitor clusters running Mirantis Kubernetes Engine (formerly Docker Enterprise) using Mirantis Container Cloud, AWS, VMware and other providers.
Reduce the complexities of managing Kubernetes clusters anywhereAshnikbiz
Learn how Kubernetes has become a critical component for deploying applications on multi-platform / multi-cloud environments and how to manage and monitor clusters running Mirantis Kubernetes Engine (formerly Docker Enterprise) using Mirantis Container Cloud, AWS, VMware and other providers.
Enhance your multi-cloud application performance using Redis Enterprise P2Ashnikbiz
This document provides an overview of Redis Enterprise. It discusses how Redis Enterprise is an in-memory multi-model database built on open source Redis that supports high-performance operational, analytics, and hybrid use cases. It offers deployment options including cloud, on-premises, and Kubernetes and supports a wide range of modern use cases including caching, transactions, streaming, messaging, and analytics workloads. Redis Enterprise provides features like high availability, security, and support for Redis modules to extend its capabilities.
Enhance your multi-cloud application performance using Redis Enterprise P1Ashnikbiz
Redis Enterprise can help enhance performance for multi-cloud applications. The webinar covered challenges of multi-cloud environments and how Redis is used, including a demo of setting up Redis across different platforms. It discussed how 72% of customers in Southeast Asia and India are adopting multi-cloud and the business benefits like improved experience, flexibility and reduced time to launch products. Challenges of multi-cloud like complex architecture and application performance were also reviewed.
Gain multi-cloud versatility with software load balancing designed for cloud-...Ashnikbiz
Over 50% organizations today are changing how they develop applications to support their digital transformation goals, and a multi-cloud strategy often plays a big role in that. For many organizations, it’s just not practical to be tied to one cloud anymore, given the flexibility of choosing the right cloud for each application.
Gain multi-cloud versatility with software load balancing designed for cloud-...Ashnikbiz
Over 50% organizations today are changing how they develop applications to support their digital transformation goals, and a multi-cloud strategy often plays a big role in that.
Enterprise-class security with PostgreSQL - 1Ashnikbiz
For businesses that handle personal data everyday, the security aspect of their database is of utmost importance.
With an increasing number of hack attacks and frauds, organizations want their open source databases to be fully equipped with the top security features.
Enterprise-class security with PostgreSQL - 2Ashnikbiz
For businesses that handle personal data everyday, the security aspect of their database is of utmost importance.
With an increasing number of hack attacks and frauds, organizations want their open source databases to be fully equipped with the top security features.
Automation Hour 1/28/2022: Capture User Feedback from AnywhereLynda Kane
Slide Deck from Automation Hour 1/28/2022 presentation Capture User Feedback from Anywhere presenting setting up a Custom Object and Flow to collection User Feedback in Dynamic Pages and schedule a report to act on that feedback regularly.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
Hands On: Create a Lightning Aura Component with force:RecordDataLynda Kane
Slide Deck from the 3/26/2020 virtual meeting of the Cleveland Developer Group presentation on creating a Lightning Aura Component using force:RecordData.
Learn the Basics of Agile Development: Your Step-by-Step GuideMarcel David
New to Agile? This step-by-step guide is your perfect starting point. "Learn the Basics of Agile Development" simplifies complex concepts, providing you with a clear understanding of how Agile can improve software development and project management. Discover the benefits of iterative work, team collaboration, and flexible planning.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
Semantic Cultivators : The Critical Future Role to Enable AIartmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker...TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Leading AI Innovation As A Product Manager - Michael JidaelMichael Jidael
Unlike traditional product management, AI product leadership requires new mental models, collaborative approaches, and new measurement frameworks. This presentation breaks down how Product Managers can successfully lead AI Innovation in today's rapidly evolving technology landscape. Drawing from practical experience and industry best practices, I shared frameworks, approaches, and mindset shifts essential for product leaders navigating the unique challenges of AI product development.
In this deck, you'll discover:
- What AI leadership means for product managers
- The fundamental paradigm shift required for AI product development.
- A framework for identifying high-value AI opportunities for your products.
- How to transition from user stories to AI learning loops and hypothesis-driven development.
- The essential AI product management framework for defining, developing, and deploying intelligence.
- Technical and business metrics that matter in AI product development.
- Strategies for effective collaboration with data science and engineering teams.
- Framework for handling AI's probabilistic nature and setting stakeholder expectations.
- A real-world case study demonstrating these principles in action.
- Practical next steps to begin your AI product leadership journey.
This presentation is essential for Product Managers, aspiring PMs, product leaders, innovators, and anyone interested in understanding how to successfully build and manage AI-powered products from idea to impact. The key takeaway is that leading AI products is about creating capabilities (intelligence) that continuously improve and deliver increasing value over time.
AI and Data Privacy in 2025: Global TrendsInData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in the today’s world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
What is Model Context Protocol(MCP) - The new technology for communication bw...Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
Automation Dreamin' 2022: Sharing Some Gratitude with Your UsersLynda Kane
Slide Deck from Automation Dreamin'2022 presentation Sharing Some Gratitude with Your Users on creating a Flow to present a random statement of Gratitude to a User in Salesforce.
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxJustin Reock
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
2. A quick recap!
Earlier we saw:
- What is Streaming Replication
- How to set up Streaming Replication in v9.3
- How v9.3 enhancements made switchover and switchback easier
- How v9.3 enhancements ease the setup of replication using pg_basebackup
3. What are we going to do today
- See the new enhancements in v9.4 which take away the pain of guessing the right wal_keep_segments value
- See the new time-lagging replication capability in v9.4
- Short intro to logical replication, introduced in v9.4
4. Parameter Changes in v9.4
- New recovery.conf parameters:
  - primary_slot_name
  - recovery_min_apply_delay
- New postgresql.conf parameter:
  - max_replication_slots
- New parameter values for postgresql.conf:
  - wal_level can now take the value logical
6. How DBAs do it today
- Guess a proper value for wal_keep_segments based on transaction volume
- Keep monitoring the transaction rate
- Increase wal_keep_segments proactively
7. What if you 'guessed' a wrong value
- A smaller value means replication may go out of sync
  - Need to rebuild the secondary node from a base backup of the primary node
  - Set up the replication again
  - Guess the wal_keep_segments value again
- A larger value means you might be wasting storage space
- Rebuild replication if the secondary server goes down
- Archiving WALs to avoid these issues = more storage
8. How is that going to change
- Create a replication slot on the primary server
- Add it in recovery.conf on the secondary server
- The primary server will keep WAL files until the server using the replication slot has received them
- No guesswork!
- If the secondary server goes down, pending WALs are still kept on the primary server
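The steps above can be sketched in SQL; the slot name standby1_slot here is an illustrative placeholder, not a required name:

```sql
-- On the primary: create a physical replication slot (new in 9.4)
SELECT * FROM pg_create_physical_replication_slot('standby1_slot');

-- On the secondary, reference it in recovery.conf:
--   primary_slot_name = 'standby1_slot'

-- Back on the primary: restart_lsn shows the oldest WAL position
-- the slot is still retaining for the standby
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;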
9. Caveats
- If the secondary server goes down for a long time
  - WAL files will continue to accumulate on the primary server
  - The replication slot needs to be dropped manually in such cases
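If a standby is retired for good, its slot has to be removed by hand so the primary can recycle the retained WAL; a sketch, again with an illustrative slot name:

```sql
-- Drop the abandoned slot; the primary is then free to
-- recycle the WAL segments it was holding back
SELECT pg_drop_replication_slot('standby1_slot');
```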
11. Why would you need it?
- Ever tried Point-in-Time Recovery?
  - Stop the production database
  - Restore from a backup
  - Reapply the transaction logs/archived WALs since the backup
  - Stop at the point just before the application issue/bug introduced data inconsistency/corruption
- What if the backup size is huge?
- What if there are too many archived WALs to be applied?
- Higher recovery time = higher downtime = loss of business
12. Set up a time-lagging DR
- Set up a time-lagging DR in PostgreSQL v9.4 with an acceptable amount of time-lag. Let's say 2 hours
- If there is a need for Point-in-Time Recovery:
  - Stop the primary server
  - Apply only the pending WALs (not everything since the last backup, only 2 hours' worth)
  - Stop recovery before the point of corruption
  - Promote the secondary server to be primary
  - Change the connection configuration
- Less time to bring up the server = reduced loss of business
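A recovery.conf sketch for such a time-lagged standby; the host, user, and slot names are illustrative:

```
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
primary_slot_name = 'dr_slot'        # optional, avoids guessing wal_keep_segments
recovery_min_apply_delay = 2h        # standby stays 2 hours behind the primary
```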
13. Backdated Reporting and Time-travel Queries
- Run correlation/comparative queries, e.g. to check profit margin as compared to yesterday
  - Pull data from the primary server and a secondary server lagging by a day
- Pull reports from yesterday's database
  - Pause recovery on the secondary and pull the reports
- Reduces the downtime needed on the primary DB for end-of-day reporting
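Pausing and resuming replay on the lagged standby uses the standard admin functions (these carry the pg_xlog_* names in 9.4; they were renamed to pg_wal_* in v10):

```sql
-- Freeze WAL replay so reports see a stable, day-old snapshot
SELECT pg_xlog_replay_pause();

-- Verify replay is paused (returns true while paused)
SELECT pg_is_xlog_replay_paused();

-- Let the standby catch up again once reporting is done
SELECT pg_xlog_replay_resume();
```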
14. Demo
- Primary server running on port 5532 on localhost
- Secondary server running on port 6532 on localhost
- postgresql.conf on primary:
  - max_replication_slots = 2
  - max_wal_senders = 2
  - wal_level = hot_standby
  - archive_mode = off  # no archiving set up
- Create a replication slot on primary:
  - select * from pg_create_physical_replication_slot('testingv94');
15. Demo
- recovery.conf on the secondary server:
  - standby_mode = on
  - primary_conninfo = 'host=127.0.0.1 port=5532 user=postgres'
  - primary_slot_name = 'testingv94'
  - recovery_min_apply_delay = 1min
- postgresql.conf on secondary:
  - hot_standby = on
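Once both servers are up, replication and the configured apply delay can be checked from the primary, for example:

```sql
-- Run on the primary: one row per connected standby.
-- replay_location trailing sent_location reflects the apply delay
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```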