The document discusses the capabilities of RMAN, the Oracle database backup and recovery tool. It notes that RMAN offers flexibility, knowledge of database internals, data file checking, and fast recovery and cloning. While the syntax can be complex and practical knowledge is often scarce, RMAN enables efficient backups in various forms, including incremental backups, retention settings, compression, and automatic control file backups. RMAN scripts can implement backup schedules and clean up old backups and archive logs. RMAN also supports restore, recovery, point-in-time recovery, and bare database recovery. Control files store a limited amount of backup information locally, while catalogs centralize that information but require a catalog database.
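A minimal RMAN script along the lines described might look like the following sketch; it assumes the database runs in ARCHIVELOG mode with disk as the default device, and the tag is illustrative:

```sql
-- Run from the RMAN prompt after: rman target /
CONFIGURE CONTROLFILE AUTOBACKUP ON;        -- automatic control file backups
RUN {
  BACKUP AS COMPRESSED BACKUPSET DATABASE   -- compressed full backup
    PLUS ARCHIVELOG                         -- include archived redo logs
    TAG 'nightly_full';
  DELETE NOPROMPT OBSOLETE;                 -- clean up per the retention policy
}
```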
Oracle Database Backups and Disaster Recovery @ Autodesk - Alan Williams
Alan Williams of Autodesk presented on Oracle Database backups and disaster recovery. Autodesk uses Oracle RAC with Data Guard for high availability across two data centers. They implemented a disk-based backup solution with daily full backups and hourly log backups retained for 30 days. This simplified their backup infrastructure and improved performance, allowing 4TB backups to complete in 10 hours with 24x data deduplication.
Presentation backup and recovery best practices for very large databases (v... - xKinAnx
This document provides best practices for backup and recovery of very large databases (VLDBs). It discusses VLDB trends requiring databases to scale to terabytes and beyond. The key is protecting growing data while maintaining cost efficiency. The presentation covers assessing recovery requirements, architecting backup environments, leveraging Oracle tools, planning data layout, developing backup procedures, and recovery strategies. It also provides a Starbucks case study example.
You most probably don't need an RMAN catalog database - Yury Velikanov
Or: 10 compelling reasons why you may need a catalog database (alternative title). The title of this session is deliberately thought provoking. The author is an experienced Oracle DBA in the backup and recovery area. During the presentation he goes through the top reasons why you may need to implement an RMAN catalog database and offers additional ideas on how you can improve your backups by leveraging the benefits a catalog database provides. He also explains in which cases, and why, you may not need a catalog database. You will go away with a clear understanding of how to benefit from an RMAN catalog database and when it may be optional. This is another presentation in the author's popular RMAN series.
What is new on 12c for Backup and Recovery? Presentation - Francisco Alvarez
Francisco Munoz Alvarez is an Oracle ACE Director and president of several Oracle user groups. He has many Oracle certifications and experience beta testing various Oracle products.
The presentation covers new features in Oracle Database 12c for backup and recovery including the multitenant container database, enhancements to RMAN and Data Pump, and changes to privileges for backups. It also discusses pluggable databases, container and PDB backup/restore, multisection backups, active duplicate, and SQL usage in RMAN.
This document provides an agenda and overview for a training session on Oracle Database backup and recovery. The agenda covers the purpose of backups and recovery, Oracle data protection solutions including Recovery Manager (RMAN) and flashback technologies, and the Data Recovery Advisor tool. It also discusses various types of data loss to protect against, backup strategies like incremental backups, and validating and recovering backups.
Oracle Database 12c offers new enhancements and additions to Recovery Manager (RMAN). The features listed in this article help you transport data across platforms and reduce downtime by up to 8x compared with traditional migration approaches, recover tables and table partitions to a point in time without affecting other objects in the database, and audit RMAN-related events using unified auditing. Take advantage of these new features for efficient backup and recovery.
This document discusses user-managed database backup and recovery, including:
- The difference between user-managed and server-managed backup which uses OS commands versus RMAN.
- How to perform a complete database recovery by restoring files and archive logs and applying redo logs.
- How to perform incomplete recovery to recover to a past time or SCN by restoring files and applying redo logs until a specified point.
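The user-managed incomplete recovery described above can be sketched in SQL*Plus; the target time is illustrative, and the data files are assumed to have already been restored from a backup taken before that time:

```sql
-- After restoring data files and archive logs with OS commands
STARTUP MOUNT
RECOVER DATABASE UNTIL TIME '2024-01-15:12:00:00';  -- apply redo up to this point
ALTER DATABASE OPEN RESETLOGS;                      -- open a new database incarnation
```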
This document discusses Oracle database backup and recovery. It covers the need for backups, different types of backups including full, incremental, physical and logical. It describes user-managed backups and RMAN-managed backups. For recovery, it discusses restoring from backups and applying redo logs to recover the database to a point in time. Flashback recovery is also mentioned.
The document provides an overview of database backup, restore, and recovery. It describes various types of failures that may occur including statement failures, user process failures, instance failures, media failures, and user errors. It emphasizes the importance of defining a backup and recovery strategy that considers business requirements, operational requirements, technical considerations, and disaster recovery issues to minimize data loss and downtime in the event of failures.
A virtual private catalog allows you to maintain a single recovery catalog repository while enforcing security boundaries between the administrators of different databases, and between DBAs, so that their duties can be separated.
Join the webinar to learn about the virtual private catalog, including a demo.
Overview of RMAN
Overview of Recovery Catalog
About Virtual Private Catalog
Benefits of Virtual Private Catalog
Create Virtual Private Catalog
Manage Virtual Private Catalog
RMAN Stored Scripts
Q&A
Yuri has been called in to audit RMAN backup scripts on a regular basis for several years now as part of his day-to-day duties. He sees the same errors, over and over again, in the scripts Oracle DBAs use to back up critical databases. Those errors can play a significant role in a recovery process, when you are working under stress. In this presentation you will be introduced to the typical issues, with hints on how to address them.
RMAN in Oracle Database 12c provides several new features to enhance backup and recovery capabilities. These include support for pluggable database backups, using SQL statements directly in RMAN, separating DBA privileges for security, and enhancing active database duplication. RMAN also allows multisection backups of very large files and table recovery directly from RMAN backups.
Flashback Database allows rewinding a database to undo data corruptions or errors. It works by using redo logs and block images to restore the database to a previous state. Configuring Flashback Database requires enabling it, setting a retention target, and having the database in ARCHIVELOG mode. Operations include flashing back to a time, SCN, or restore point. Monitoring involves checking the flashback window and log sizes.
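The configuration and flashback operations just described might be sketched as follows; the retention target and timestamp are illustrative:

```sql
-- One-time setup (database must be in ARCHIVELOG mode)
ALTER SYSTEM SET db_flashback_retention_target = 1440;  -- minutes, i.e. 24 hours
ALTER DATABASE FLASHBACK ON;

-- Rewinding (from a mounted, not open, instance)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO TIMESTAMP SYSDATE - 1/24;  -- one hour back
ALTER DATABASE OPEN RESETLOGS;
```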
This document discusses diagnosing database issues and corruption. It covers the Data Recovery Advisor, which can detect, analyze, and repair failures. It also covers handling block corruption, setting up the Automatic Diagnostic Repository (ADR) to store diagnostic data, and using the Health Monitor to perform proactive database checks. Key topics include listing and advising on failures using RMAN, performing block media recovery, viewing ADR data with ADRCI, and running manual and automatic Health Monitor checks.
RMAN backup scripts should be improved in the following ways:
1. Log backups thoroughly and send failure alerts to ensure recoverability.
2. Avoid relying on a single backup and use redundancy to protect against data loss.
3. Back up control files last and do not delete archives until backups are complete.
4. Check backups regularly to ensure they meet recovery needs.
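The four points above could be combined into a wrapper script along these lines; the paths, tag, and alert address are placeholders, and the redundancy of point 2 is assumed to come from a separately configured retention policy:

```shell
#!/bin/sh
# Sketch of an RMAN wrapper: log everything, delete archives only after they
# are backed up, back up the control file last, and alert on failure.
LOG=/backup/logs/rman_$(date +%Y%m%d_%H%M).log

rman target / log "$LOG" <<'EOF'
BACKUP DATABASE TAG 'daily_full';
BACKUP ARCHIVELOG ALL;                    -- archives are backed up first
DELETE NOPROMPT ARCHIVELOG ALL
  BACKED UP 1 TIMES TO DEVICE TYPE DISK;  -- delete only what is safely backed up
BACKUP CURRENT CONTROLFILE;               -- control file last, so it records the run
EOF

# Alert on failure instead of failing silently
if [ $? -ne 0 ]; then
  mail -s "RMAN backup FAILED on $(hostname)" dba@example.com < "$LOG"
fi
```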
This document discusses configuring a database for recoverability. It covers placing a database in ARCHIVELOG mode, configuring multiple archive log destinations, configuring the Fast Recovery Area (FRA), and specifying retention policies. The key benefits of using the FRA are that it simplifies backup management and automatically manages disk space for recovery files.
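The recoverability configuration described above can be sketched as follows; the FRA size and path are illustrative:

```sql
-- Enable ARCHIVELOG mode (requires a clean shutdown and mount)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Configure the Fast Recovery Area: size first, then location
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
```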
RMAN is an Oracle tool that performs physical backups and recovery of Oracle databases. It can perform full backups as well as incremental backups. Incremental backups only back up changed blocks since the previous backup. RMAN also allows recovery of individual datafiles, tablespaces, or the entire database using backups. It facilitates various recovery scenarios including datafile recovery, tablespace recovery, and disaster recovery when all files are lost.
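An incremental strategy like the one described is typically built from a periodic level 0 base and more frequent level 1 backups, for example:

```sql
BACKUP INCREMENTAL LEVEL 0 DATABASE;  -- periodic full (base) backup, e.g. weekly
BACKUP INCREMENTAL LEVEL 1 DATABASE;  -- only blocks changed since the last backup
```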
RMAN - New Features in Oracle 12c - IOUG Collaborate 2017 - Andy Colvin
Every DBA should know how to back up and recover a database - their job may depend on it one day. In order to make backup and recovery easier, Oracle gives DBAs RMAN. In Oracle 12c, RMAN includes many new features to make backup and recovery simpler and more robust. This session will cover 5 of the top new features introduced in RMAN for Oracle 12c, coming from more than four years of experience with the product. Discussion of each new feature will explain how it can be used by normal DBAs in their everyday work life - not just abstract discussions on features that will never actually be used in the real world.
This document discusses using a recovery catalog with RMAN for database backups and recovery. It covers:
1. The benefits of using a recovery catalog over just the control file, such as storing more historical data.
2. Creating a recovery catalog which involves configuring a catalog database, creating an owner, and generating the catalog.
3. Registering target databases with the catalog and maintaining the catalog's synchronization with database changes.
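The three steps above might be sketched as follows; the schema name, password, and net service name are placeholders:

```sql
-- In the catalog database: create and privilege the catalog owner
CREATE USER rcat IDENTIFIED BY "change_me" QUOTA UNLIMITED ON users;
GRANT recovery_catalog_owner TO rcat;

-- Then from RMAN, connected as: rman TARGET / CATALOG rcat@catdb
CREATE CATALOG;     -- generate the catalog schema
REGISTER DATABASE;  -- register the connected target database
RESYNC CATALOG;     -- re-synchronize after structural changes
```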
Duplicating a database creates an identical copy of a database that can be used for testing or recovery purposes. There are multiple techniques for duplicating a database using RMAN, including duplicating from an active database, from RMAN backups, with or without connections to the target instance, recovery catalog, or using backups alone. The key steps are preparing the auxiliary instance, ensuring backups and redo logs are available, allocating auxiliary channels, and using the RMAN DUPLICATE command to restore files and recover the database.
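An active-database duplication along the lines described might look like this; the instance names are illustrative, and the auxiliary instance is assumed to be started NOMOUNT:

```sql
-- Connected as: rman TARGET sys@prod AUXILIARY sys@clone
DUPLICATE TARGET DATABASE TO clone
  FROM ACTIVE DATABASE   -- copy over the network, no staged backups needed
  NOFILENAMECHECK;       -- same file paths are acceptable on a different host
```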
This document discusses using Oracle's Recovery Manager (RMAN) to perform various database recovery tasks, including recovering from the loss of data files, using incremental backups to reduce recovery time, switching to image copies for fast recovery, restoring a database to a new host, and performing disaster recovery. It provides examples of using RMAN commands like RESTORE, RECOVER, SWITCH, and SET NEWNAME to restore and recover database files from backups.
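A restore to a new location using the commands mentioned might be sketched as follows; the file number and path are illustrative:

```sql
RUN {
  SET NEWNAME FOR DATAFILE 4 TO '/u02/oradata/users01.dbf';  -- new location
  RESTORE DATAFILE 4;   -- pull the file from backup
  SWITCH DATAFILE ALL;  -- point the control file at the new name
  RECOVER DATAFILE 4;   -- apply redo to bring it current
}
```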
Reduce planned database down time with Oracle technology - Kirill Loifman
How do you design an Oracle database system to minimize planned interruptions? That depends on the requirements, goals, SLAs, and so on. The presentation follows a top-down approach: first it describes the major types of planned maintenance and prioritizes them, then, based on the system availability requirements, it finds the most cost-effective techniques to address them. A bit of planning, strategy, and of course modern database and OS techniques, including the latest Oracle 12c features.
This document discusses managing space for databases, including:
- Using 4KB sector disks and specifying disk sector sizes when creating databases, data files, and redo log files.
- Transporting tablespaces and databases between platforms using RMAN and Data Pump utilities.
- The process involves making tablespaces read-only, converting data files to the target platform format, importing metadata, and making tablespaces read/write on the target system.
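The transport process above can be sketched for a single tablespace; the tablespace name, target platform string, and staging path are illustrative:

```sql
-- On the source: freeze the tablespace, then convert with RMAN
ALTER TABLESPACE users READ ONLY;
CONVERT TABLESPACE users
  TO PLATFORM 'Linux x86 64-bit'   -- target platform endianness/format
  FORMAT '/stage/%U';              -- converted data file copies land in /stage
-- Metadata then moves via Data Pump (expdp/impdp with TRANSPORT_TABLESPACES),
-- after which the tablespace is set READ WRITE on the target system.
```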
This document provides an overview of Oracle database concepts and tools. It describes the core components of an Oracle database including the database, server processes, memory structures, and client/server architecture. It also outlines the tools used to configure an Oracle database such as the Oracle Universal Installer, Database Configuration Assistant, and command line utilities. Automatic Storage Management (ASM) is discussed as the preferred storage management solution.
Tablespace point-in-time recovery (TSPITR) allows recovery of one or more tablespaces to an earlier point in time without affecting other tablespaces. It performs restore and recovery of data files for the recovery set and auxiliary set to the target time, then exports and imports metadata to make the recovered tablespaces available. TSPITR is useful for undoing DML changes or recovering from logical corruption in a subset of the database, and can be fully automated using RMAN or performed with a custom auxiliary instance.
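The fully automated form of TSPITR mentioned above reduces to a single RMAN command; the tablespace name, target time, and auxiliary destination are illustrative:

```sql
-- RMAN creates and tears down the auxiliary instance automatically
RECOVER TABLESPACE users
  UNTIL TIME "TO_DATE('2024-01-15 09:00:00','YYYY-MM-DD HH24:MI:SS')"
  AUXILIARY DESTINATION '/u01/aux';  -- scratch area for auxiliary-set files
-- The recovered tablespace comes back offline: back it up, then bring it online.
```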
Oracle Recovery Manager (Oracle RMAN) has evolved since its release in version 8i. In the newest version, Oracle 12c, RMAN has great new features that allow you to reduce your downtime in case of a disaster. In this session you will learn about the new features introduced in Oracle 12c and how you can take advantage of them from the first day you upgrade to this version.
This document discusses monitoring and tuning RMAN backup and restore performance. It describes how to configure RMAN for asynchronous I/O and multiplexing, monitor job progress, identify bottlenecks, and balance backup speed versus recovery speed. Specific parameters like MAXPIECESIZE, FILESPERSET, and MAXOPENFILES are examined for their effect on performance.
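The tuning parameters named above appear in channel and backup commands roughly as follows; the values are illustrative starting points, not recommendations:

```sql
-- Persistent cap on backup piece size
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 32G;

RUN {
  -- Limit how many input files one channel multiplexes at a time
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK MAXOPENFILES 8;
  -- Fewer files per backup set speeds up restoring a single file
  BACKUP DATABASE FILESPERSET 4;
}
```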
This document discusses backup and recovery strategies for Oracle Exadata systems. It provides an overview of using Recovery Manager (RMAN) to manage backups and outlines several backup destination options for Exadata, including storing backups on Exadata storage, external disk storage like the ZFS Storage Appliance, or tape libraries. The document also reviews considerations for designing an Exadata backup and recovery solution, including sizing backups and choosing retention policies based on recovery time and data loss objectives.
This document discusses various methods for performing database backups, including Recovery Manager (RMAN), Oracle Secure Backup, and user-managed backups. It covers key backup concepts like full versus incremental backups, online versus offline backups, and image copies versus backup sets. The document also provides instructions on configuring backup settings and scheduling automated database backups using RMAN and Enterprise Manager.
MySQL Enterprise Backup provides fast, consistent, online backups of MySQL databases. It allows for backing up InnoDB and MyISAM tables while the database is running, minimizing downtime. The tool takes physical backups of the data files rather than logical backups, allowing for very fast restore times compared to alternatives like mysqldump. It supports features like compressed backups, incremental backups, and point-in-time recovery.
High availability and disaster recovery in IBM PureApplication System - Scott Moonen
This document discusses high availability and disaster recovery strategies for IBM PureApplication System. It begins with definitions of key terms like HA, DR, RTO, and RPO. It then outlines the various tools in PureApplication System that can be used to achieve HA and DR, such as compute node availability, block storage, storage replication, and external storage. The document provides examples of how to compose these tools to meet different HA and DR scenarios, like handling compute node failures, database updates, and site failures. It concludes with some caveats around networking considerations and middleware-specific factors.
MySQL Enterprise Backup provides fast, consistent, online backups of MySQL databases. It allows for full and incremental backups, compressed backups to reduce storage needs, and point-in-time recovery. MySQL Enterprise Backup works by backing up InnoDB data files, copying and compressing the files, and backing up the transaction log files from the time period when the data files were copied. This allows for consistent backups and point-in-time recovery of the database.
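A typical MySQL Enterprise Backup cycle along these lines might be sketched as follows; the user, option file, and backup directory are placeholders:

```shell
# Full online backup, with the redo log applied so it is immediately restorable
mysqlbackup --user=admin --password \
  --backup-dir=/backups/full backup-and-apply-log

# Restore (server stopped, data directory empty)
mysqlbackup --defaults-file=/etc/my.cnf \
  --backup-dir=/backups/full copy-back
```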
Presentation on backup and recovery - Tehmina Gulfam
The document provides an overview of backup strategies and technologies. It discusses different types of backups including full, differential, and incremental backups. It covers backup architecture including backup clients, servers, and storage nodes. Key aspects of the backup process and restore process are outlined. Different backup topologies of direct attached, LAN-based, and SAN-based backups are described. Options for backup technology include backing up to tape or disk. Features of Acronis backup software are briefly mentioned.
• We are sleeping well, and our mobile is ringing and ringing. Message: DISASTER! In this session (on slides) we do NOT talk about potential disasters (such as BCM); we talk about: what NOW? A new version of my old, well-known session, updated for all the changes that have happened in the DBA world in the last two to three years.
• So, from the ground to the sky and beyond - everything for surviving a disaster. Which tasks should have been finished BEFORE? Does virtual or physical SQL Server matter? We talk about systems, databases, people, encryption, passwords, certificates, and users.
• In this session (with a few demos) I'll show which parts of our SQL Server environment are critical and how to be prepared for a disaster. In some documents I'll show you how to be BEST prepared.
The document discusses best practices for preparing for and surviving a disaster involving IT systems. It emphasizes the importance of being prepared through thorough backup and recovery procedures. Key aspects of preparation include having documented procedures for backup and restore of SQL and SharePoint environments, understanding roles and responsibilities, maintaining service level agreements, keeping an encrypted envelope of credentials, and ensuring necessary hardware, software, and support contracts are accounted for. The overall message is that with proper planning through documented policies and procedures, the impact of a disaster can be minimized.
The document discusses best practices for preventing and recovering from disasters affecting IT systems. It emphasizes the importance of being prepared through regular backups, testing restores, clear documentation of backup and restore procedures, and defined roles and responsibilities. Key recommendations include performing backups to separate storage regularly; testing restores from backups; having a disaster recovery plan, procedures, and environment ready; and ensuring appropriate staff are assigned roles to respond to an outage. The overall message is that the best way to survive a disaster is through preparation, including backups, documentation, training and assigning roles.
RMAN is Oracle's backup and recovery tool that provides advantages like incremental backups, automatic block checking for corruption, and logging of all backup operations. It can be implemented in various ways such as backing up to disk without a media management layer for simplicity or backing up to tape with a media management layer. A recovery catalog database is optional but provides benefits like storing metadata for long periods of time. New features in Oracle9i include optimized restores, block-level recovery, and simplified syntax.
Session from SQLDay 2016 Conference in Wroclaw.
2 AM. We're sleeping well and our mobile is ringing and ringing. Message: DISASTER! In this session (on slides) we are NOT talking about potential disasters (such as BCM); we talk about: what happens NOW? Which tasks should have been finished BEFORE? Does virtual or physical SQL Server matter? We talk about systems, databases, people, encryption, passwords, certificates, and users. In this session (with a few demos) I'll show which parts of our SQL Server environment are critical and how to be prepared for a disaster. In some documents, I'll show you how to be BEST prepared.
The document discusses database recovery techniques, including:
- Recovery algorithms ensure transaction atomicity and durability despite failures by undoing uncommitted transactions and ensuring committed transactions survive failures.
- Main recovery techniques are log-based using write-ahead logging (WAL) and shadow paging. WAL protocol requires log records be forced to disk before related data updates.
- Recovery restores the database to the most recent consistent state before failure. This may involve restoring from a backup and reapplying log entries, or undoing and reapplying operations to restore consistency.
PowerPoint presentation on backup and recovery. A good presentation covering all topics.
This document discusses how to configure Oracle database backup settings using Recovery Manager (RMAN). It covers setting persistent RMAN configuration settings, enabling automatic control file backups, configuring backup destinations and channels, optimizing backups, and creating compressed or encrypted backups. Key topics include using the CONFIGURE command to set backup retention policies, backup copy settings, and backup optimization parameters, as well as allocating channels and specifying backup device types and locations.
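The persistent settings described above are all driven by the CONFIGURE command; the values below are illustrative, and some compression levels may require the Advanced Compression option:

```sql
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;  -- keep a week recoverable
CONFIGURE CONTROLFILE AUTOBACKUP ON;       -- auto-back up control file and spfile
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;  -- two disk channels by default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';  -- compression level (licensing may apply)
CONFIGURE BACKUP OPTIMIZATION ON;          -- skip files already backed up per policy
```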
This document discusses best practices for preparing for and responding to a disaster involving critical IT systems like servers and databases. It emphasizes the importance of regular backups, having recovery procedures documented, testing restores, and defining roles and responsibilities of team members. It provides guidance on backup strategies for SQL Server and SharePoint, including using different types of backups, storing backups offline, and setting backup schedules. It also stresses the value of preparation, being ready to restore from backups, and having contact information and credentials documented in advance in case of an emergency.
Engage for success ibm spectrum accelerate 2xKinAnx
IBM Spectrum Accelerate is software that extends the capabilities of IBM's XIV storage system, such as consistent performance tuning-free, to new delivery models. It provides enterprise storage capabilities deployed in minutes instead of months. Spectrum Accelerate runs the proven XIV software on commodity x86 servers and storage, providing similar features and functions to an XIV system. It offers benefits like business agility, flexibility, simplified acquisition and deployment, and lower administration and training costs.
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep divexKinAnx
The document provides an overview of IBM Spectrum Virtualize HyperSwap functionality. HyperSwap allows host I/O to continue accessing volumes across two sites without interruption if one site fails. It uses synchronous remote copy between two I/O groups to make volumes accessible across both groups. The document outlines the steps to configure a HyperSwap configuration, including naming sites, assigning nodes and hosts to sites, and defining the topology.
Software defined storage provisioning using ibm smart cloudxKinAnx
This document provides an overview of software-defined storage provisioning using IBM SmartCloud Virtual Storage Center (VSC). It discusses the typical challenges with manual storage provisioning, and how VSC addresses those challenges through automation. VSC's storage provisioning involves three phases - setup, planning, and execution. The setup phase involves adding storage devices, servers, and defining service classes. In the planning phase, VSC creates a provisioning plan based on the request. In the execution phase, the plan is run to automatically complete all configuration steps. The document highlights how VSC optimizes placement and streamlines the provisioning process.
This document discusses IBM Spectrum Virtualize 101 and IBM Spectrum Storage solutions. It provides an overview of software defined storage and IBM Spectrum Virtualize, describing how it achieves storage virtualization and mobility. It also provides details on the new IBM Spectrum Virtualize DH8 hardware platform, including its performance improvements over previous platforms and support for compression acceleration.
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...xKinAnx
HyperSwap provides high availability by allowing volumes to be accessible across two IBM Spectrum Virtualize systems in a clustered configuration. It uses synchronous remote copy to replicate primary and secondary volumes between the two systems, making the volumes appear as a single object to hosts. This allows host I/O to continue if an entire system fails without any data loss. The configuration requires a quorum disk in a third site for the cluster to maintain coordination and survive failures across the two main sites.
IBM Spectrum Protect (formerly IBM Tivoli Storage Manager) provides data protection and recovery for hybrid cloud environments. This document summarizes a presentation on IBM's strategic direction for Spectrum Protect, including plans to enhance the product to better support hybrid cloud, virtual environments, large-scale deduplication, simplified management, and protection for key workloads. The presentation outlines roadmap features for 2015 and potential future enhancements.
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...xKinAnx
The document provides instructions for installing and configuring Spectrum Scale 4.1. Key steps include: installing Spectrum Scale software on nodes; creating a cluster using mmcrcluster and designating primary/secondary servers; verifying the cluster status with mmlscluster; creating Network Shared Disks (NSDs); and creating a file system. The document also covers licensing, system requirements, and IBM and client responsibilities for installation and maintenance.
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...xKinAnx
This document discusses quorum nodes in Spectrum Scale clusters and recovery from failures. It describes how quorum nodes determine the active cluster and prevent partitioning. The document outlines best practices for quorum nodes and provides steps to recover from loss of a quorum node majority or failure of the primary and secondary configuration servers.
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...xKinAnx
IBM Spectrum Scale can help achieve ILM efficiencies through policy-driven, automated tiered storage management. The ILM toolkit manages file sets and storage pools and automates data management. Storage pools group similar disks and classify storage within a file system. File placement and management policies determine file placement and movement based on rules.
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...xKinAnx
The document provides an overview of IBM Spectrum Scale Active File Management (AFM). AFM allows data to be accessed globally across multiple clusters as if it were local by automatically managing asynchronous replication. It describes the various AFM modes including read-only caching, single-writer, and independent writer. It also covers topics like pre-fetching data, cache eviction, cache states, expiration of stale data, and the types of data transferred between home and cache sites.
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...xKinAnx
This document provides information about replication and stretch clusters in IBM Spectrum Scale. It defines replication as synchronously copying file system data across failure groups for redundancy. While replication improves availability, it reduces performance and increases storage usage. Stretch clusters combine two or more clusters to create a single large cluster, typically using replication between sites. Replication policies and failure group configuration are important to ensure effective data duplication.
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...xKinAnx
This document provides information about clustered NFS (cNFS) in IBM Spectrum Scale. cNFS allows multiple Spectrum Scale servers to share a common namespace via NFS, providing high availability, performance, scalability and a single namespace as storage capacity increases. The document discusses components of cNFS including load balancing, monitoring, and failover. It also provides instructions for prerequisites, setup, administration and tuning of a cNFS configuration.
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...xKinAnx
This document provides an overview of managing Spectrum Scale opportunity discovery and working with external resources to be successful. It discusses how to build presentations and configurations to address technical and philosophical solution requirements. The document introduces IBM Spectrum Scale as providing low latency global data access, linear scalability, and enterprise storage services on standard hardware for on-premise or cloud deployments. It also discusses Spectrum Scale and Elastic Storage Server, noting the latter is a hardware building block with GPFS 4.1 installed. The document provides tips for discovering opportunities through RFPs, RFIs, events, workshops, and engaging clients to understand their needs in order to build compelling proposal information.
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...xKinAnx
This document provides guidance on sizing and configuring Spectrum Scale and Elastic Storage Server solutions. It discusses collecting information from clients such as use cases, workload characteristics, capacity and performance goals, and infrastructure requirements. It then describes using tools to help architect solutions that meet the client's needs, such as breaking the problem down, addressing redundancy and high availability, and accounting for different sites, tiers, clients and protocols. The document also provides tips for working with the configuration tool and pricing the solution appropriately.
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...xKinAnx
The document provides an overview of key concepts covered in a GPFS 4.1 system administration course, including backups using mmbackup, SOBAR integration, snapshots, quotas, clones, and extended attributes. The document includes examples of commands and procedures for administering these GPFS functions.
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...xKinAnx
This document provides an overview of Spectrum Scale 4.1 system administration. It describes the Elastic Storage Server options and components, Spectrum Scale native RAID (GNR), and tips for best practices. GNR implements sophisticated data placement and error correction algorithms using software RAID to provide high reliability and performance without additional hardware. It features auto-rebalancing, low rebuild overhead through declustering, and end-to-end data checksumming.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager APIUiPathCommunity
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
Perfect for developers, testers, and automation enthusiasts!
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
Andrew Marnell: Transforming Business Strategy Through Data-Driven InsightsAndrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025BookNet Canada
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Quantum Computing Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
Technology Trends in 2025: AI and Big Data AnalyticsInData Labs
At InData Labs, we have been keeping an ear to the ground, looking out for AI-enabled digital transformation trends coming our way in 2025. Our report will provide a look into the technology landscape of the future, including:
-Artificial Intelligence Market Overview
-Strategies for AI Adoption in 2025
-Anticipated drivers of AI adoption and transformative technologies
-Benefits of AI and Big data for your business
-Tips on how to prepare your business for innovation
-AI and data privacy: Strategies for securing data privacy in AI models, etc.
Download your free copy nowand implement the key findings to improve your business.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdfAbi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Big Data Analytics Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Big Data Analytics Quick Research Guide by Arthur MorganArthur Morgan
Presentation: Recovery Manager (RMAN) Configuration and Performance Tuning Best Practices
1.
Recovery Manager (RMAN) Configuration and Performance Tuning
Best Practices
Timothy Chien
Principal Product Manager
Oracle America
Greg Green
Senior Database Administrator
Starbucks Coffee Company
4. 4
Oracle Products Available Online
Oracle Store
Buy Oracle license and support online today at oracle.com/store
5. 5
Agenda
• Recovery Manager Overview
• Configuration Best Practices
– Backup Strategies Comparison
– Fast Recovery Area (FRA)
• Performance Tuning Methodology
– Backup Data Flow
– Tuning Principles
– Diagnosing Performance Bottlenecks
• Starbucks Case Study
• Summary/Q&A
6. 6
Oracle Recovery Manager (RMAN)
Oracle-integrated Backup & Recovery Engine
[Diagram: Oracle Enterprise Manager drives RMAN, which backs up the Database to the Fast Recovery Area and, via Oracle Secure Backup*, to tape drive or cloud]
• Intrinsic knowledge of database file formats and recovery procedures
• Block validation
• Online block-level recovery
• Tablespace/data file recovery
• Online, multi-streamed backup
• Unused block compression
• Native encryption
• Integrated disk, tape & cloud backup leveraging the Fast Recovery Area (FRA) and Oracle Secure Backup
*RMAN also supports leading 3rd party media managers
7. 7
Most Critical Question to Ask First
• What are my recovery requirements?
– Assess tolerance for data loss - Recovery Point Objective (RPO)
• How frequently should backups be taken?
• Is point-in-time recovery required?
– Assess tolerance for downtime - Recovery Time Objective (RTO)
• Downtime: problem identification + recovery planning + systems recovery
• Tiered RTO per level of granularity, e.g. database, tablespace, table, row
– Determine backup retention policy
• Onsite, offsite, long-term
• Then: how does my RMAN backup strategy fulfill those requirements?
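The retention-policy decision above maps directly onto a persistent RMAN setting; a minimal sketch, where the 30-day window and the redundancy count are assumed example values rather than recommendations from this deck:

```sql
-- Keep whatever backups are needed to recover to any point in the last
-- 30 days (assumed example window, not a value from this presentation)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;

-- Alternative form: retain a fixed number of backups of each file
-- CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
```

Backups that fall outside the policy are listed by REPORT OBSOLETE and can be removed with DELETE OBSOLETE.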
8. 8
Option 1: Full & Incremental Tape Backups
• Well-suited for:
– Databases that can tolerate hours/days RTO
– Environments where disk is at a premium
– Low-medium change frequency between backups, e.g. < 20%
• Backup strategy:
– Weekly level 0 and daily ‘differential’ incremental backup sets to tape, with optional backup compression
– Enable block change tracking - only changed blocks are read and written during incremental backup
– Archived logs are backed up and retained on-disk, as needed
[Timeline: weekly Level 0 (full) backup followed by daily Level 1 (incremental) backups, with archived logs backed up throughout]
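Enabling block change tracking, as the strategy above recommends, is a one-time SQL step; the tracking-file path below is a hypothetical example (with OMF or ASM the USING FILE clause can be omitted):

```sql
-- Hypothetical tracking-file location; the database then records changed
-- blocks so Level 1 backups read only what changed since the last backup
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/bct.f';
```

V$BLOCK_CHANGE_TRACKING shows whether tracking is currently enabled.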
9. 9
Script Example
• Configure SBT (i.e. tape) channels:
– CONFIGURE CHANNEL DEVICE TYPE SBT PARMS '<channel parameters>';
• Weekly full backup:
– BACKUP AS BACKUPSET INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
• Daily incremental backup:
– BACKUP AS BACKUPSET INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
10. 10
Option 2: Incrementally Updated Disk Backups
• Well-suited for:
– Databases that can tolerate no more than a few hours RTO
– Environments where disk can be allocated for 1X size of database or most critical tablespaces
• Backup strategy:
– Initial image copy to FRA, followed by daily incremental backups
– Roll forward copy with incremental, to produce new on-disk copy
– Full backup archived to tape, as needed
– Archived logs are backed up and retained on-disk, as needed
– Fast recovery from disk or SWITCH to use image copies
[Timeline: initial Level 0 image copy (archived to tape), then daily Level 1 incrementals rolled forward into the image copy, with archived logs backed up throughout]
11. 11
Script Example
• Configure SBT channels, if needed:
– [CONFIGURE CHANNEL DEVICE TYPE SBT PARMS '<channel parameters>';]
• Daily roll forward copy and incremental backup:
– RECOVER COPY OF DATABASE WITH TAG 'OSS';
– BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'OSS' DATABASE;
– [BACKUP DEVICE TYPE SBT ARCHIVELOG ALL;]
• What happens?
– First run: image copy
– Second run: incremental backup
– Third run+: roll forward copy & create new incremental backup
• Backup FRA to tape, if needed:
– [BACKUP RECOVERY AREA;]
13. 13
Option 3: Offload Backups to Physical Standby Database in a Data Guard Environment
• Well-suited for:
– Databases that require no more than several minutes of recovery time, in event of any failure
– Environments that can preferably allocate symmetric hardware and storage for a physical standby database
– Environments whose tape infrastructure can be shared between primary and standby database sites
• Backup strategy:
– Full and incremental backups offloaded to the physical standby database
– Fast incremental backup on standby with Active Data Guard
– Backups can be restored to primary or standby database
• Backups can be taken at each database for optimal local protection
14. 14
Backup Strategies Comparison

Option 1: Full & Incremental Tape Backups
• Backup factors: fast incrementals; save space with backup compression; cost-effective tape storage
• Recovery factors: full backup restored first, then incrementals & archived logs; tape backups read sequentially

Option 2: Incrementally Updated Disk Backups
• Backup factors: incremental + roll forward to create up-to-date copy; requires 1X production storage for copy; optional tape storage
• Recovery factors: backups read via random access; restore-free recovery with SWITCH command

Option 3: Offload Backups to Physical Standby Database
• Backup factors: above benefits + primary database free to handle more workloads; requires 1X production hardware and storage for standby database
• Recovery factors: fast failover to standby database in event of any failure; backups are last resort, in event of double site failure
15. 15
Fast Recovery Area (FRA) Sizing
• If you want to keep:
– Control file backups and archived logs
• Estimate total size of all archived logs generated between successive backups on the busiest days x 2 (in case of unexpected redo spikes)
– Flashback logs
• Add in {Redo rate x Flashback retention target time x 2}
– Incremental backups
• Add in their estimated sizes
– On-disk image copy
• Add in size of the database minus size of temporary files
– Further details:
• http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmconfb.htm#i1019211
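As a sketch, the additive estimate described above can be collected into one formula (the symbols simply name the quantities on this slide):

```latex
\text{FRA}_{\text{size}} \approx
  2 \times \text{ArchivedLogs}_{\text{busiest day}}
  + 2 \times (\text{RedoRate} \times \text{FlashbackRetention})
  + \sum_i \text{IncrementalBackup}_i
  + \left(\text{DB}_{\text{size}} - \text{TempFiles}_{\text{size}}\right)
```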
16. 16
FRA File Retention and Deletion
• When FRA space needs exceed quota, automatic file deletion occurs in the following order:
1. Flashback logs
• Oldest flashback time can be affected (with the exception of guaranteed restore points)
2. RMAN backup pieces/copies and archived redo logs that are:
• Not needed to maintain the RMAN retention policy, or
• Have been backed up to tape (via DEVICE TYPE SBT) or a secondary disk location (via BACKUP RECOVERY AREA TO DESTINATION ‘..’)
• If the archived log deletion policy is configured as:
– APPLIED ON [ALL] STANDBY
• Archived log must have been applied to mandatory or all standby databases
– SHIPPED TO [ALL] STANDBY
• Archived log must have been transferred to mandatory or all standby databases
– BACKED UP <N> TIMES TO DEVICE TYPE [DISK | SBT]
• Archived log must have been backed up at least <N> times
– If [APPLIED or SHIPPED] and BACKED UP policies are configured, both conditions must be satisfied for an archived log to be considered for deletion.
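The deletion policies listed above are set with CONFIGURE; a minimal sketch, with the backup count and device type as assumed example values:

```sql
-- Assumed example: an archived log becomes eligible for deletion from the
-- FRA only after it has been backed up twice to tape
CONFIGURE ARCHIVELOG DELETION POLICY
  TO BACKED UP 2 TIMES TO DEVICE TYPE SBT;
```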
19. 19
RMAN Backup Data Flow
A. Prepare backup tasks & read blocks into input buffers
B. Validate blocks & copy them to output buffers
– Compress and/or encrypt data if requested
C. Write output buffers to storage media (DISK or SBT)
– Media manager handles writing of output buffers to SBT
[Diagram: blocks flow from input buffers through validation/copy into the output I/O buffer, then are written to storage media]
Restore is the inverse of the data flow.
20. 20
Tuning Principles
1. Determine the maximum input disk, output media, and network throughput
– E.g. Oracle ORION (downloadable from OTN), dd command
– Evaluate network throughput at all touch points, e.g. database server -> media management environment -> tape system
2. Configure disk subsystem for optimal performance
– Use ASM
• Configure external redundancy & leverage hardware RAID
• If disks will be shared for DATA and FRA disk groups:
– Provision the outer sectors to DATA for higher performance
– Provision the inner sectors to FRA, which has lower performance but is suitable for sequential write activity (e.g. backups)
• Otherwise, separate DATA and FRA disks
– If not using ASM, stripe data files across all disks with 1 MB stripe size.
21. 21
Tuning Principles
3. Tune RMAN to fully utilize disk subsystem and tape
– Use asynchronous I/O
• For disk backup:
– If the system does not support native asynchronous I/O, set DBWR_IO_SLAVES.
• Four slave processes are allocated per session
• For tape backup:
– Set BACKUP_TAPE_IO_SLAVES, unless the media manager states otherwise.
• One slave process is allocated per channel process
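The two I/O-slave settings above are ordinary initialization parameters; a hedged sketch with illustrative values:

```sql
-- Tape I/O slaves: DEFERRED makes the change apply to new sessions
ALTER SYSTEM SET BACKUP_TAPE_IO_SLAVES = TRUE DEFERRED;

-- Disk I/O slaves, only if native asynchronous I/O is unavailable;
-- this is a static parameter, so it needs an SPFILE change and a restart
ALTER SYSTEM SET DBWR_IO_SLAVES = 4 SCOPE = SPFILE;
```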
22. 22
Tuning Principles
3. Tune RMAN to fully utilize disk subsystem and tape
– For backups to disk, allocate as many channels as can be handled by the system.
• For image copies, one channel processes one data file at a time.
– For backups to tape, allocate one channel per tape drive.
• “But allocating # of channels greater than # of tape drives increases backup performance.. so that’s a good thing, right?”
– No.. restore time can be degraded due to tape-side multiplexing
• If BACKUP VALIDATE duration (i.e. read phase) where:
– Time {channels = tape drives} ~= Time {channels > tape drives}
• Bottleneck is most likely in the media manager
• Discussed later in ‘Diagnosing Performance Bottlenecks’
– Time {channels = tape drives} >> Time {channels > tape drives}
• Tune the read phase (discussed next)
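The channel-count rules above translate into the persistent PARALLELISM setting; a sketch assuming two tape drives (the counts are example values):

```sql
-- One channel per tape drive (assumed: two drives)
CONFIGURE DEVICE TYPE SBT PARALLELISM 2;

-- For disk, raise parallelism as far as the I/O subsystem can sustain
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
```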
23. 23
Read Phase - RMAN Multiplexing
• Multiplexing level: maximum number of files read by one channel, at any time, during backup
– Min(MAXOPENFILES, FILESPERSET)
– MAXOPENFILES default = 8
– FILESPERSET default = 64
• Larger vs smaller backup set trade-offs
– Restore performance
• All data files vs. single data file
– Backup restartability
• MAXOPENFILES determines number and size of input buffers
– Number and size of input buffers in V$BACKUP_ASYNC_IO/V$BACKUP_SYNC_IO
– All buffers allocated from PGA, unless disk or tape I/O slaves are enabled (SGA by default, or LARGE_POOL if set)
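MAXOPENFILES and FILESPERSET, described above, are set per channel and per backup command respectively; a sketch with assumed example values:

```sql
-- On striped/ASM storage, reading one file per channel is often optimal
CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 1;

-- Smaller backup sets speed up single-data-file restores (assumed value)
BACKUP DATABASE FILESPERSET 4;
```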
24. 24
Read Phase - RMAN Input Buffers
• MAXOPENFILES ≤ 4
– Each buffer = 1 MB, total buffer size for channel is up to 16 MB
• MAXOPENFILES=1 => 16 buffers/file, 1 MB/buffer = 16 MB/file
– Optimal for ASM or striped system
• 4 < MAXOPENFILES ≤ 8
– Each buffer = 512 KB, total buffer size for channel is up to 16 MB. Number of buffers per file will depend on number of files.
• MAXOPENFILES=8 => 4 buffers/file, 512 KB/buffer = 2 MB/file
– Optimal for non-striped system
– Reduce the number of input buffers/file to more effectively spread out I/O usage (since each file resides on one disk)
• MAXOPENFILES > 8
– Each buffer = 128 KB, 4 buffers per file, so each file will have a 512 KB buffer
25. 25
Tuning Principles
4. If BACKUP VALIDATE still does not utilize available disk I/O & there is available CPU and memory:
– Increase RMAN buffer memory usage
• With Oracle Database 11g Release 11.1.0.7 or lower versions:
• Set _BACKUP_KSFQ_BUFCNT (default 16) = # of input disks
– Number of input buffers allocated per file
– Achieve balance between memory usage and I/O
• E.g. setting it to 500 for 500 input disks may exceed tolerable memory consumption
• Set _BACKUP_KSFQ_BUFSZ (default 1048576) = stripe size (in bytes)
• With Oracle Database 11g Release 2:
• Set _BACKUP_FILE_BUFCNT, _BACKUP_FILE_BUFSZ
• Restore performance can increase with these parameters set, as output buffers used during restore will also increase correspondingly
• Refer to Support Note 1072545.1 for more details
• Note: with Oracle Database 11g Release 2 & ASM, all buffers are automatically sized for optimal performance
26. 26
Backup Data Flow
A. Prepare backup tasks & read blocks into input buffers
B. Validate blocks & copy them to output buffers
– Compress and/or encrypt data if requested
C. Write output buffers to storage media (DISK or SBT)
– Media manager handles writing of output buffers to SBT
[Diagram: input buffers -> output I/O buffer -> write to storage media]
27. 27
Tuning Principles
5. RMAN backup compression & encryption guidelines
– Both operations depend heavily on CPU resources
– Increase CPU resources or use LOW/MEDIUM setting
– Verify that uncompressed backup performance scales properly,
as channels are added
– Note - if data is encrypted with:
• TDE column encryption
– For encrypted backup, data is double encrypted (RMAN treats
the already-encrypted columns as ordinary data and encrypts again)
• TDE tablespace encryption
– For compressed & encrypted backup, encrypted
tablespaces are decrypted, compressed, then re-encrypted
– If only encrypted backup, encrypted blocks pass through
backup unchanged
28. 28
Tuning Principles
6. Tune RMAN output buffer size
– Output buffers => blocks written to DISK as copies or backup pieces
or to SBT as backup pieces
– Four buffers allocated per channel
– Default buffer sizes
• DISK: 1 MB
• SBT: 256 KB
– Adjust with BLKSIZE channel parameter
– Set BLKSIZE >= media management client buffer size
– No changes needed for Oracle Secure Backup
• Output buffer count & size for disk backup can be manually adjusted
– Details in Support Note 1072545.1
– Note: With Oracle Database 11g Release 2 & ASM, all
buffers are automatically sized for optimal performance
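A hedged sketch of adjusting the SBT output buffer size via BLKSIZE; the channel name and 256 KB value are illustrative and should be matched to the media management client's buffer size:

```sql
RUN {
  -- BLKSIZE (in bytes) sets the output buffer size for this channel;
  -- the value shown is illustrative and should be >= the MML buffer size
  ALLOCATE CHANNEL t1 DEVICE TYPE SBT
    PARMS 'BLKSIZE=262144';
  BACKUP DATABASE;
}
```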
30. 30
Diagnosing Performance Bottlenecks – Pt. 1
• Query the EFFECTIVE_BYTES_PER_SECOND column (EBPS) for the
‘AGGREGATE’ row in V$BACKUP_ASYNC_IO or V$BACKUP_SYNC_IO
– If EBPS < storage media throughput, run BACKUP VALIDATE
• Case 1: If BACKUP VALIDATE time ~= actual backup time, the
read phase is the likely bottleneck.
– Refer to RMAN multiplexing and buffer usage guidelines
– Investigate ‘slow’ performing files
• Find the data file with the highest (LONG_WAITS/IO_COUNT) ratio
• If ASM, add disk spindles and/or re-balance disks
• Move file to new disk or multiplex with another ‘slow’ file
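The checks above can be expressed as queries against V$BACKUP_ASYNC_IO (a sketch; the view is populated during a running backup or reflects the most recent one):

```sql
-- Aggregate throughput for the backup job
SELECT effective_bytes_per_second
  FROM v$backup_async_io
 WHERE type = 'AGGREGATE';

-- Rank input files by long-wait ratio to find 'slow' files
SELECT filename, long_waits, io_count,
       ROUND(long_waits / io_count, 3) AS wait_ratio
  FROM v$backup_async_io
 WHERE type = 'INPUT' AND io_count > 0
 ORDER BY wait_ratio DESC;
```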
31. 31
Diagnosing Performance Bottlenecks – Pt. 2
• Case 2: If BACKUP VALIDATE time << actual backup time, the
buffer copy or write-to-storage-media phase is the likely
bottleneck.
– Refer to backup compression and encryption guidelines
– If tape backup, check media management (MML) settings:
• TCP/IP buffer size
• Media management client/server buffer size
• Client/socket timeout
• Media server hardware, connectivity to tape
• Enable tape compression (but not RMAN compression)
32. 32
Restore & Recovery Performance Best Practices
• Minimize archive log application by using incremental backups
• Use block media recovery for isolated block corruptions
• Keep adequate number of archived logs on disk
• Increase RMAN buffer memory usage
• Tune database for I/O, DBWR performance, CPU utilization
• Refer to MAA Media Recovery Best Practices paper
– Active Data Guard 11g Best Practices (includes best practices for
Redo Apply)
38. 38
The Starbucks of Today
• Licensed stores: grocery stores, Borders Book stores, airports,
convention centers
• Foodservice: “We Proudly Brew,” serving coffee through hotels,
colleges, hospitals, airlines
• Company-operated stores in the U.S. and internationally
39. 39
EDW - Who it Supports
• Production EDW supports Starbucks internal
business users
• 10 TB VLDB warehouse, growing 1-2 TB per year
• Provides reports to the store level – sales, staffing, etc.
• Thousands of stores directly access the EDW
• Web-based dashboard reports via company intranet
• Monday Morning Mayhem
• Front-end reporting with Microstrategy
• Leveraging Ascential DataStage ETL Tool
• Toad, SQL Developer, and other ad-hoc tools used by
developers and QA
• And Much, Much, More…..
41. 41
Starbucks Enterprise Data Warehouse
(EDW) Backup and Recovery Tuning
• Starbucks Background and EDW Architecture
• EDW Backup and Recovery Strategy
• Issues/Challenges with Tape Backups
• Course of Action to Resolve Tape Backup
Performance Issue
42. 42
Backup Strategy
• RPO – Anytime within the last 24 hours, Backup window of 24 hours
• RMAN Incrementally Updated Backup Strategy
• Disk - Flash Recovery Area (FRA)
• Daily Incremental update of image copy with ‘SYSDATE – 1’
• Daily Level 1 Differential Incremental Backups
• Daily Script:
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'WEEKLY_FULL_BKUP'
    UNTIL TIME 'SYSDATE - 1';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'WEEKLY_FULL_BKUP' DATABASE;
  BACKUP AS BACKUPSET ARCHIVELOG ALL NOT BACKED UP
    DELETE ALL INPUT;
  DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 1 DAYS
    DEVICE TYPE DISK;
}
• Tape
• Weekly: BACKUP RECOVERY AREA
• Each day, for the rest of the week: BACKUP BACKUPSET ALL
43. 43
Backup Performance to FRA
• Daily Incremental Update + Incremental Backup
• 1 hr 45 min to 2 hrs 30 min, depending upon workload
• 60-75 minutes for RECOVER COPY OF DATABASE ..
• 30-45 minutes for incremental backup set creation + time to
purge old backup pieces
• The backup set is typically 250-350 GB but can vary depending
on the workload
• 4 RMAN channels to disk running on single RAC node
44. 44
Backup Performance to Tape
• Daily Backup of Backup Sets to Tape
• Using 2 channels on 1 node takes 60-90 minutes (some
concern here with speed)
• Weekly Backup of Recovery Area to Tape
• With 4 channels (2 channels per node) backing up 10.5 TB in
FRA, backup duration can be highly variable.
• Backup will sometimes run in 15-16 hours and other times
30+ hours!
• Why the wide variance?
• But first, what is expected backup rate?
45. 45
What is Expected Backup Rate?
• LTO-2 tape drive can back up at roughly 70 MB/sec compressed
(or better)
• 4 drives x 70 MB = 280 MB/sec (1 TB/hr)
• Is the tape rate supported by FRA disk?
• RMAN – BACKUP VALIDATE DATAFILECOPY ALL
• Observed rate (read phase) > 1 TB/hr
• What is the effect of GigE connection to media server?
• Maximum theoretical speed is 128 MB/sec
• With overhead, ~115 MB/sec per node
• Maximum rate from 2 nodes is 230 MB/sec (828 GB/hr)
• Observed rate is more like 180 MB/sec (650 GB/hr)
• Conclusion: GigE throttles overall backup rate
• FRA backup time = 10.5 TB / 650 GB/hr = ~16 hrs
• Something else going on with backup time variance..
46. 46
Why So Much Variance in FRA Backup Time?
• Three Problem Areas Identified
• Link Aggregation on the Media Server
• Spent a lot of time making sure this was working
• Network Load Balancing from Network Switch
• On occasion, 3 out of 4 RMAN channels jumped on one
port of Network Interface Card (NIC)
• Processor Architecture on Media Server
• T2000 Chip – 1 chip x 4 cores x 4 threads
• Requires setting interrupts to load balance across the 4
cores
• One core completely pegged during tests
47. 47
Starbucks Enterprise Data Warehouse
(EDW) Backup and Recovery Tuning
• Starbucks Background and EDW Architecture
• EDW Backup and Recovery Strategy
• Issues/Challenges with Tape Backups
• Course of Action to Resolve Tape Backup
Performance Issue
48. 48
Tuning Objective
• Decrease Variance in Backup Time
• Increase Backup Throughput for Future Growth
• EDW capacity increasing from 12 to 17 TB over the next month
• Backup window still 24 hours
• Current 720 GB/hr throughput will overrun the window at 17 TB
• Desired throughput is ~ 1 TB/hr to accommodate growth &
meet backup window
• Simplify Backup Hardware Architecture
49. 49
Proposed Solution 1 -
Eliminate Separate Media Server &
Install Media Server on 2 RAC Nodes
• Benefits
• Reduces backup complexity
• Eliminates 1 GigE network bottleneck
• Eliminates network load balancing issues
• Easier to monitor
51. 51
What is New Theoretical Bottleneck?
• LTO-3 tape drive backs up at ~140 MB/s compressed (or better)
• 2 drives (1 drive / node) x 140 MB/sec = 280 MB/s (1 TB/hr)
• Is tape speed supported by FRA disk?
• RMAN - BACKUP VALIDATE DATAFILECOPY ALL
• Observed rate > 1 TB/hr (with 4 RMAN channels)
• Is tape speed limited by connection over fiber?
• Each Node has 4 x 2 Gb Fiber Connections with EMC PowerPath
Multipathing software
• Storage Engineer – “1.37 GB/Sec max rate for cluster.”
• Two tape drives - 280 MB/s out of 1.37 GB/s
• 20% of available I/O capacity utilization
• FRA backup time: 10.5 TB / 1 TB/hr = 10.5 hrs
• 35% performance improvement vs. today (16 hrs)
52. 52
Finally – Some Real RMAN Tuning
• Tests were conducted by running a BACKUP VALIDATE
DATAFILECOPY ALL command with 2 channels
• Test 1 – 2 channels on 1 node
• Test 2 - 2 channels on 2 nodes (1 channel/node)
• FRA disk group comprises 72 x 193 GB LUNs
• _BACKUP_KSFQ_BUFCNT = 16 (default) => 200 MB/s (720 GB/hr)
• _BACKUP_KSFQ_BUFCNT = 32 => 250 MB/s (900 GB/hr)
• _BACKUP_KSFQ_BUFCNT = 64 => 300 MB/s (1 TB/hr)
• 50% read rate improvement when correctly tuned
• Yes, I can fully drive 2 LTO-3s with 2 channels, based on
BACKUP VALIDATE testing
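The read-rate test used here can be reproduced with a sketch like the following (channel names are illustrative); BACKUP VALIDATE reads and checks blocks without writing output, so its rate isolates the read phase:

```sql
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  -- Reads all image copies in the FRA; no backup pieces are written
  BACKUP VALIDATE DATAFILECOPY ALL;
}
```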
53. 53
Test 1 – 1 Node with 2 Channels
• Test _BACKUP_KSFQ_BUFCNT = 16, 32, 64
54. 54
Test 2 – 2 Channels with 1 Channel per Node
Node 1 - _BACKUP_KSFQ_BUFCNT = 16, 32, 64
Node 2 - _BACKUP_KSFQ_BUFCNT = 16, 32, 64
55. 55
Initial Results of Tape Backup Testing
Media Server Installed on RAC Nodes
• 1 channel per node (2 channels total) + 2 LTO-3 drives
• Observed backup rate of 200 MB/s (720 GB/hr) vs.
theoretical 280 MB/s (1 TB/hr with 2 x 140 MB/s for LTO-3)
• Recall: RMAN VALIDATE (read rate) > 1 TB/hr, so RMAN is not
the bottleneck
•Other possible factors:
• Database compression – Yes, but can’t account for all of the
lower backup rates
• Tuning – Additional performance might be gained by tuning
media server parameters
• Hardware Setup – HBA ports configuration or how tapes are
zoned to the servers
56. 56
After Rezoning Tape Drives to HBAs
2 Channels with 1 Channel per Node
• Node 1 ~ 145 MB/s
• Node 2 ~ 120 MB/s
• 33% improvement after rezoning
57. 57
Four Channels with 2 Channels per Node
Achieved Backup Rate ~ 1.6 TB/Hour
Node 1 Backup Throughput: ~240 MB/s
Node 2 Backup Throughput: ~200 MB/s (due to other high query activity)
58. 58
Summary
• Starbucks Background and EDW Architecture
• EDW Backup and Recovery Strategy
• Issues/Challenges with Tape Backups
• Identify the bottlenecks in your system and know your
theoretical backup speed
• Course of Action to Resolve Tape Backup
Performance Issue
• Re-architect if bottleneck is hardware related
• Tune RMAN parameters to get the most out of your backup
hardware
• 50% increase in RMAN read performance was achieved by
tuning _BACKUP_KSFQ_BUFCNT
• RMAN should never be the bottleneck
• Keep tuning as new bottlenecks are discovered..
60. 60
Summary
• Recovery & business requirements drive the design of backup / data
protection strategy
– Disk and/or tape, offload to Data Guard?
• RMAN performance tuning is all about answering the question:
– What is my bottleneck? (then removing it)
• Determine maximum throughput/ceiling of each backup phase
– Read blocks into input buffers (memory, disk I/O)
– Copy to output buffers (CPU, esp. compression and/or encryption)
– Write to storage media (memory, disk/tape I/O, media management/HW
configuration)
• Get knowledgeable about media management and tape configuration
– A smarter DBA = a stronger case to make with the SA!
61. 61
RMAN Trivia Time..
1. In which Oracle release did RMAN first appear?
2. In which Oracle release did the multi-section backup
feature first appear?
3. What is the negative effect of RMAN + tape-side
multiplexing?
4. Which view reports throughput and memory buffer
usage during backup?
5. How does Oracle Database 11g Release 2 RMAN with
ASM behave differently in memory buffer allocation
versus older releases?
62. 62
Key HA Sessions, Labs, & Demos by Oracle Development
Monday, 20 Sep – Moscone South *
3:30p Extreme Consolidation with RAC One Node, Rm 308
4:00p Edition-Based Redefinition, Hotel Nikko, Monterey I / II
5:00p Five Key HA Innovations, Rm 103
5:00p GoldenGate Strategy & Roadmap, Moscone West, Rm 3020
Tuesday, 21 Sep – Moscone South *
11:00a App Failover with Data Guard, Rm 300
12:30p Oracle Data Centers & Oracle Secure Backup, Rm 300
2:00p ASM Cluster File System, Rm 308
2:00p Exadata: OLTP, Warehousing, Consolidation, Rm 103
3:30p Deep Dive into OLTP Table Compression, Rm 104
3:30p MAA for E-Business Suite R12.1, Moscone West, Rm 2020
5:00p Instant DR by Deploying on Amazon Cloud, Rm 300
Wednesday, 22 Sep – Moscone South *
11:30a RMAN Best Practices, Rm 103
11:30a Database & Exadata Smart Flash Cache, Rm 307
11:30a Configure Oracle Grid Infrastructure, Rm 308
1:00p Top HA Best Practices, Rm 103
1:00p Exadata Backup/Recovery Best Practices, Rm 307
4:45p GoldenGate Architecture, Hotel Nikko, Peninsula
Thursday, 23 Sep – Moscone South *
10:30a Active Data Guard Under the Hood, Rm 103
1:30p Minimal Downtime Upgrades, Rm 306
3:00p DR for Database Machine, Rm 103
Hands-on Labs Marriott Marquis, Salon 10 / 11
Monday, Sep 20, 12:30 pm - 1:30 pm Oracle Active Data Guard
Tuesday, Sep 21, 5:00 pm - 6:00 pm Oracle Active Data Guard
Demos Moscone West DEMOGrounds
Mon & Tue 9:45a - 5:30p; Wed 9:00a - 4:00p
Maximum Availability Architecture (MAA)
Oracle Active Data Guard
Oracle Secure Backup
Oracle Recovery Manager & Flashback
Oracle GoldenGate
Oracle Real Application Clusters
Oracle Automatic Storage Management
* All session rooms are at Moscone South unless otherwise noted
* After Oracle OpenWorld, visit
http://www.oracle.com/goto/availability