
Performance Tuning for Optimal Backup Process on Database Server

February 2020


Aloysius Ari Wicaksono1, Brenda Sylviasyah2, Yosua Dwi Raharjo3

Abstract – With the current rise of new technology and the need to maintain data against the looming risk of hardware failure and external threats, this paper reassesses the optimal backup process for IT services by experimenting with different parameter tunings. The aim is to suggest which combination is suitable for a given business requirement, based on a real case study. The study uses the data and requirements of one of the largest government-owned companies in Indonesia, using Oracle's backup tool, RMAN.

Keywords: Backup, Database, Oracle, RMAN

I. Introduction

Data is currently considered one of the most important assets of any organization. It is so important for running and improving business systems that people say data has changed the company workflow [1]. Hence, maintaining data is now regarded as one of the key points of IT services.
To protect the availability and integrity of data at a 'high' level, many measures [2], varying in practice, are taken to prevent theft, modification, write errors, or any other risk-inducing activity that may cause a loss for the company. These measures should be designed not only to protect the database from external harm, but also to provide a restoration function should something happen to the data. This is why periodic backup is a very important procedure that must be performed on any database.
Backup is the process of creating a duplicate instance by copying the data into a new archive, so that the data will not be lost should the primary database crash, become corrupted, or be lost [3].
There are two main purposes of a database backup. The first, as stated above, is data restoration; the second is accessing information from a point in the past that may since have been changed or updated in the current primary database. A backup serving these purposes takes almost as much space as, if not more than, the primary database itself. This has pushed research on how to design and assist backup procedures, especially for companies with very large amounts of data, for instance through data de-duplication and compression, so this target can be achieved.
To execute a high-availability and optimal disaster recovery strategy, a trustworthy and reliable backup system has a very important role to fill. For organizations that use an Oracle-based database, Oracle provides a comprehensive backup and recovery manager named RMAN. RMAN gives the user a certain degree of freedom in tuning to match the user's needs. Using the case study of a real company, this study aims to find the best-suited backup parameter tuning to suggest to the company for its backup procedure.
Studies and cases on optimizing the selection of a backup method given a company's conditions have been performed before. Oracle, being one of the preferred database systems alongside SQL Server, has its own distinct backup mechanism.
II. Theoretical Review

A. Case Study (Business Overview)

The case study for this experiment is taken from one of the largest government-owned companies in Indonesia. Backups are required daily, and there are major transactions on the system every day, with no service time limit per day.
The company mandates that backups be performed on the following schedule: a full backup once a week, and an incremental backup once a day, retaining the previous days' backups for up to 7 days until the next weekly full backup, as a measure against data loss should the system fail.

According to [4], there are two kinds of backup, hot and cold, depending on the database status. A backup performed on an inactive database, ensuring the consistency of the data when restored, is called a 'cold backup'; conversely, a backup performed on an active, running database, so that services can continue running, is called a 'hot backup'.
Since this company's database runs 24 hours a day, this experiment is performed under hot backup conditions.
There are currently two database backup approaches for Oracle that can meet such needs: OS-level backup and RMAN. While both produce the same result, they differ in the database activity state they support: OS-level backup is only suitable for cold backup, whereas RMAN supports the hot backup required in this case.

B. Recovery Manager (RMAN)

RMAN is a default tool developed by Oracle to back up the database files needed for recovery should the primary database fail. It covers the following types of files:
a. Data files, and image copies of data files
b. Control file, and image copies of the control file
c. Archived redo logs
d. Server parameter file
e. Backup pieces, which are automatically generated by RMAN

It is also noteworthy that RMAN's compression feature can decrease the size of the backup to 50-60% of the original file.

There are a few different backup procedure types available in RMAN:

1. Full Backup

This procedure backs up all of the data and every other configuration that exists in the primary database. Depending on the size of the primary database, the drawbacks of this procedure are the time and resources it takes, as well as the need for a large amount of storage.
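As an illustration, a minimal RMAN full backup might look like the following sketch; the FORMAT destination here is a hypothetical path, not one from the case study:

    # full backup of every datafile; %U expands to a unique backup-piece name
    RMAN> BACKUP DATABASE FORMAT '/backup/full_%U';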

2. Archivelog Backup

Figure 1 ArchiveLog Backup. Source: https://docs.oracle.com/cd/B28359_01/server.111/b28310/archredo002.htm#ADMIN11332

As long as the database is in ARCHIVELOG mode and the archived redo logs are backed up along with the data files, even inconsistent (hot) backups can serve as the basis for backup and restore. While a consistent backup remains one of the most important strategies for backing up most databases, these archivelog backups provide the incremental data that serves as the recovery files.
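Because a hot backup is only recoverable together with its archived logs, the two are typically backed up in one pass; a sketch of the relevant RMAN commands, assuming disk as the destination:

    # datafiles plus the archived redo logs needed to make them consistent
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    # archived logs alone, e.g. between two full backups
    RMAN> BACKUP ARCHIVELOG ALL;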

C. RPO and RTO


The disaster recovery strategy must be based on the Business Continuity Plan (BCP), which states the target Recovery Point Objective (RPO) and the Recovery Time Objective (RTO).
RPO refers to how much data we can afford to lose and must be able to restore, while RTO refers to how long restoring the data to its original instance may take. These two objectives relate closely to how long each backup process takes and to the size of the files that must be restored within that time.
To approach a state of "zero data loss" when an emergency happens, the backup process must be optimized to be faster while still achieving an acceptable degree of compression.

D. RMAN Parameter Tuning

There are a few parameters the user needs to set to optimize the backup process and meet the company's needs.

1. Compression Method
RMAN applies a binary compression algorithm before writing the data into a set named a backup set.

The result is similar to vendors' compression results, though, regrettably, there are no official statements from Oracle regarding the compression ratio of each algorithm parameter.

There are four binary compression settings in RMAN: BASIC, LOW, MEDIUM, and HIGH. The following table shows how the compression mode correlates with backup size and CPU usage [5].

RMAN Compression Type | Compression Algorithm Used | Algorithm Description
BASIC  | BZIP2 (level 1) | compression ratio in the range of MEDIUM, but slower
LOW    | LZO             | smallest compression ratio, fastest
MEDIUM | ZLIB            | good compression ratio, slower than LOW
HIGH   | BZIP2 (level 9) | highest compression ratio, slowest
Table 1 RMAN Binary Compression Matrix


BZIP2: a free and open-source file compression program that uses the Burrows–Wheeler algorithm.
LZO: a lossless data compression algorithm focused on decompression speed.
ZLIB: supports a single algorithm, DEFLATE, a variation of LZ77 (Lempel–Ziv 1977); it provides good compression on a wide variety of data with minimal use of system resources.

There is no official record of the target size and compression ratio, but an experiment has been performed with the following result [6]:

Compression Algorithm | Size of Compressed Backup | Time Taken
None (uncompressed)   | 900 MB | 2 minutes
BASIC                 | 280 MB | 3.5 minutes
MEDIUM                | 240 MB | 4.5 minutes
HIGH                  | 210 MB | 7 minutes
Table 2 RMAN Compression Result Sample
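In RMAN the algorithm is configured once and then applied to every compressed backup set; a minimal sketch, assuming the Oracle 11g syntax relevant to this study (the LOW, MEDIUM, and HIGH settings require the Advanced Compression Option, BASIC does not):

    RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
    RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;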

2. Multiplexing/Parallel Processing

Multiplexing is a process that speeds up the writing of multiple data files to a single device, since backing up a single client/subclient at a time has limitations: the process may not fully utilize the drive output or the tape speed, hence multiplexing.
By default, multiplexing is governed by MAXOPENFILES (default 8) and FILESPERSET (default 64).
To illustrate simply, the following schema shows a multiplexing level of 3 (MAXOPENFILES = 3):

Figure 2 Multiplexing Schema
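In RMAN terms, the multiplexing level is governed by MAXOPENFILES (a channel attribute) and FILESPERSET (a BACKUP option), and is effectively the lesser of the two for a given backup set. A sketch reproducing the level-3 multiplexing of Figure 2:

    RUN {
      # read at most 3 datafiles concurrently on this channel
      ALLOCATE CHANNEL ch1 DEVICE TYPE DISK MAXOPENFILES 3;
      # group up to 64 files into each backup set (the default)
      BACKUP DATABASE FILESPERSET 64;
    }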

3. Storage Disk Type

No one would deny that a better drive helps everything that requires a computational process. The drive type is especially important to select, since it will hold the most important data. Currently, the best-known disk types are SATA, SAS, and SSD.
The key differences between SATA, SAS, and SSD are as follows [7]:
a. SATA (Serial ATA)
Of all the available hard disk types, SATA disks are usually priced lower and can answer the general data storage needs of a PC.
This is because SATA drives are well known for their large capacity and good power efficiency for data storage compared to their SAS counterparts.


This type of hard disk also has no trouble keeping any kind of data format, from emails and files to web content and archives. These reasons make SATA popular, especially as the storage system of a home PC.
b. SAS (Serial Attached SCSI)
The winning points of SAS hard drives are transfer speed and durability. SAS is optimized for critical organizational applications, running 24/7 with high-speed 15,000 RPM drives, but SAS drives usually come in small capacities, making them less suitable for long-term data storage.
c. SSD (Solid State Drives)
SSD is a newer storage type that offers up to 100x the throughput of its predecessors. Compatible with any type of data, SSDs notably contain no spindle, unlike SATA and SAS drives. This not only reduces the risk of disk failure and the power consumed when writing data, but also improves speed (since nothing spins) and durability.
The drawback of SSDs is that the price per unit can be far above the two predecessors, making them less obtainable for company-scale storage.

III. Related Works

IV. Methodology

In this experiment, we test 16 combinations of multiplexing and compression level. The parameter combination with the best score is taken as the best parameter set for this experiment.
The steps of this experiment are as follows:
1. Prepare an RMAN backup folder and scripts containing the backup configuration. Two folders are prepared: backup_script, which stores the backup configuration scripts, and backup_log, which stores the log files resulting from the backup tests.
2. Inside the backup_script folder, create 4 folders, one per compression level.
3. Within each compression-level folder, create one backup script per multiplexing level to be tested.

The following are the configuration details of the 4 multiplexing levels tested:
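The original script listings are not reproduced here; as an illustration, one of the 16 scripts (MEDIUM compression at 16x parallelism) might look like the following sketch, in which the file name and backup destination are assumptions consistent with the folder layout above:

    # backup_script/medium/backup_16x.rman (hypothetical file name)
    CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO COMPRESSED BACKUPSET;
    BACKUP DATABASE FORMAT '/backup/%d_%U';

Each script would then be run non-interactively, for example with rman target / cmdfile=backup_16x.rman log=backup_log/medium_16x.log, so that the resulting log lands in the backup_log folder.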


The output results are compared on backup output size, time taken, and output rate, to determine the most optimal backup configuration and infrastructure for each database tested.

Test result data is collected using SQL*Plus, as follows:
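The exact query is not shown in the source; a sketch of the kind of SQL*Plus query that recovers these metrics from the dynamic performance view V$RMAN_BACKUP_JOB_DETAILS (the column selection is our assumption):

    -- input/output size, elapsed time, and throughput of recent backup jobs
    SELECT start_time,
           input_bytes  / 1024 / 1024 / 1024 AS input_gb,
           output_bytes / 1024 / 1024 / 1024 AS output_gb,
           time_taken_display,
           output_bytes_per_sec_display
    FROM   v$rman_backup_job_details
    ORDER  BY start_time DESC;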

Each log file created is then matched against the recap.

Log file example:


V. Experiment

By defining the correct tuning configuration for each of the parameters above, this study expects to suggest the parameter combination best suited to the case study's need for "zero data loss", should the need arise.
The backup experiment is performed with Oracle's RMAN tool, under a hot backup scheme with a full backup scenario.
The parameters taken into consideration are: storage service level, parallel processing (multiplexing), and compression level.

For the storage service level, there are levels that associate disk speed with the desired disk types stated above. The following table shows the configuration types on VMAX:

Table 3 VMAX SLO performance

This experiment currently assesses only the Platinum type, which emulates SAS-level performance (15k RPM drives or better).
The specifications of the servers used in this study are as follows:

Database Server Specification:


Exadata X4-2 (6 nodes):
- CPU: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70 GHz, 24 cores
- Memory: 256 GB
- OS: Oracle Linux Server release 5.10
- RDBMS: Oracle Database Enterprise Edition 11.2.0.4
- Database size: 812.82 GB

Storage Server Specification:


- CPU: Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70 GHz, 2 cores
- Memory: 16 GB
- OS: Linux CentOS release 6.8 (Final)
- Capacity: 300 TB

Monitoring Server Specification:


- CPU: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90 GHz, 16 cores
- Memory: 64 GB
- Storage: 80 GB (OS + Apps)
- OS: Oracle Linux Server release 5.11

With the topology as follows:


Figure 3 Backup Process Topology

Given the company's needs, we defined an ideal target for scoring the results. Each result is scored against the following thresholds:

Score | Output Size (GB) Min | Output Size (GB) Max | Output Rate (MB/s) Min | Output Rate (MB/s) Max
1     | 700 | >700 | 0   | 1
2     | 450 | 700  | 1   | 5
3     | 200 | 450  | 5   | 10
4     | 160 | 200  | 10  | 45
5     | 140 | 160  | 45  | 90
6     | 120 | 140  | 90  | 130
7     | 100 | 120  | 130 | 150
8     | 90  | 100  | 150 | 300
9     | 80  | 90   | 300 | 450
10    | 0   | 80   | 450 | >450
Table 4 Scoring Threshold (Output Size and Output Rate)

Score | Time Elapsed (min) Min | Time Elapsed (min) Max | CPU Usage (%) Min | CPU Usage (%) Max
1     | 600 | >600 | 70 | 100
2     | 240 | 600  | 50 | 70
3     | 180 | 240  | 40 | 50
4     | 120 | 180  | 35 | 40
5     | 60  | 120  | 30 | 35
6     | 30  | 60   | 25 | 30
7     | 18  | 30   | 20 | 25
8     | 12  | 18   | 16 | 20
9     | 6   | 12   | 12 | 16
10    | 0   | 6    | 0  | 12
Table 5 Scoring Threshold (Time Elapsed and CPU Usage)

- Output size: a smaller output size gives a better score
- Output rate: a higher output rate gives a better score
- Time elapsed: a faster backup process gives a better score
- CPU usage: lower CPU usage during the backup process gives a better score
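As a worked example of the scoring, using the MEDIUM/16x figures reported later in Table 9: an output size of 120.70 GB scores 6 (120-140 GB band), a backup time of 13 minutes 13 seconds scores 8 (12-18 minute band), the implied output rate of 120.70 GB / 793 s ≈ 156 MB/s scores 8 (150-300 MB/s band), and 28.68% CPU usage scores 6 (25-30% band), for a total of 6 + 8 + 8 + 6 = 28.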

The database to be tested totals roughly 800 GB in size and is restricted to RMDB types.
The backup experiments produce results in the form of Table 6.

Compression | Multiplexing/Parallel Process | Input Size (GB) | Output Size (GB) | Time Taken | CPU Usage (%)
BASIC  | 1x  | X | X | X | X
BASIC  | 8x  | X | X | X | X
BASIC  | 16x | X | X | X | X
BASIC  | 24x | X | X | X | X
LOW    | 1x  | X | X | X | X
LOW    | 8x  | X | X | X | X
LOW    | 16x | X | X | X | X
LOW    | 24x | X | X | X | X
MEDIUM | 1x  | X | X | X | X
MEDIUM | 8x  | X | X | X | X
MEDIUM | 16x | X | X | X | X
MEDIUM | 24x | X | X | X | X
HIGH   | 1x  | X | X | X | X
HIGH   | 8x  | X | X | X | X
HIGH   | 16x | X | X | X | X
HIGH   | 24x | X | X | X | X
Table 6 Experiment Result Model

These results are then converted into scores based on the defined thresholds:

Compression | Multiplexing | Output Size (Score) | Time Elapsed (Score) | Output Rate (Score) | CPU Usage (Score) | Total
BASIC  | 1x  | X | X | X | X | X
BASIC  | 8x  | X | X | X | X | X
BASIC  | 16x | X | X | X | X | X
BASIC  | 24x | X | X | X | X | X
LOW    | 1x  | X | X | X | X | X
LOW    | 8x  | X | X | X | X | X
LOW    | 16x | X | X | X | X | X
LOW    | 24x | X | X | X | X | X
MEDIUM | 1x  | X | X | X | X | X
MEDIUM | 8x  | X | X | X | X | X
MEDIUM | 16x | X | X | X | X | X
MEDIUM | 24x | X | X | X | X | X
HIGH   | 1x  | X | X | X | X | X
HIGH   | 8x  | X | X | X | X | X
HIGH   | 16x | X | X | X | X | X
HIGH   | 24x | X | X | X | X | X
Table 8 Experiment Scoring Model

VI. Result

We ran the scenario with the parameters discussed above: 4 compression types, each under 4 multiplexing scenarios (1x, 8x, 16x, and 24x), on Platinum service-level storage. The results are as follows:
Compression | Multiplexing | Input Size (GB) | Output Size (GB) | Time Taken | CPU Usage (%)
BASIC  | 1x  | 812.82 | 106.73 | 03:28:12 | 11.94
BASIC  | 8x  | 812.82 | 106.10 | 00:35:42 | 23.42
BASIC  | 16x | 812.82 | 106.01 | 00:25:10 | 31.57
BASIC  | 24x | 812.82 | 105.69 | 00:23:18 | 41.91
LOW    | 1x  | 812.82 | 151.95 | 00:37:11 | 11.59
LOW    | 8x  | 812.82 | 151.10 | 00:08:02 | 18.30
LOW    | 16x | 812.82 | 151.10 | 00:05:35 | 15.98
LOW    | 24x | 812.82 | 151.11 | 00:04:47 | 41.91
MEDIUM | 1x  | 812.82 | 120.56 | 01:46:25 | 11.94
MEDIUM | 8x  | 812.82 | 120.73 | 00:18:14 | 18.55
MEDIUM | 16x | 812.82 | 120.70 | 00:13:13 | 28.68
MEDIUM | 24x | 812.82 | 120.67 | 00:12:05 | 29.59
HIGH   | 1x  | 812.82 | 79.26  | 15:55:25 | 23.42
HIGH   | 8x  | 812.82 | 79.54  | 03:46:35 | 34.99
HIGH   | 16x | 812.82 | 78.87  | 02:44:25 | 32.92
HIGH   | 24x | 812.82 | 78.66  | 01:33:09 | 46.75
Table 9 Experiment Result
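The output rate used for scoring can be recovered from these figures as output size divided by elapsed time; for instance, LOW at 16x writes 151.10 GB in 5 minutes 35 seconds (335 s), roughly 462 MB/s, which falls in the top output-rate band of Table 4.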

We then convert the results into scores based on the defined thresholds:

Compression | Multiplexing | Output Size (Score) | Time Elapsed (Score) | Output Rate (Score) | CPU Usage (Score) | Total
BASIC  | 1x  | 7  | 3   | 3  | 9.5 | 22.5
BASIC  | 8x  | 7  | 6.5 | 5  | 7   | 25.5
BASIC  | 16x | 7  | 7   | 5  | 5   | 24
BASIC  | 24x | 7  | 7   | 5  | 3   | 22
LOW    | 1x  | 3  | 6.5 | 5  | 10  | 24.5
LOW    | 8x  | 3  | 9   | 9  | 8   | 29
LOW    | 16x | 3  | 10  | 10 | 9   | 32
LOW    | 24x | 3  | 10  | 10 | 3   | 26
MEDIUM | 1x  | 6  | 5   | 4  | 10  | 25
MEDIUM | 8x  | 6  | 7   | 6  | 8   | 27
MEDIUM | 16x | 6  | 8   | 8  | 6   | 28
MEDIUM | 24x | 6  | 8   | 8  | 6   | 28
HIGH   | 1x  | 10 | 1   | 2  | 7   | 20
HIGH   | 8x  | 10 | 3   | 3  | 5   | 21
HIGH   | 16x | 10 | 4   | 3  | 5   | 22
HIGH   | 24x | 10 | 5   | 4  | 3   | 22
Table 10 Experiment Scoring

The LOW compression with 16x multiplexing combination gives the best result in this experiment.

VII. Conclusion

From this case study, we can conclude that the most suitable parameter tuning for a hot backup process of an 812.82 GB database is LOW compression with 16x multiplexing on Platinum service-level storage: it takes only 5 minutes 35 seconds to compress the 812.82 GB database into 151.10 GB while using only 15.98% of the CPU.
Another noteworthy result is that HIGH compression can reach roughly a 10:1 size ratio (812.82 GB down to 78.66 GB at 24x), at the cost of at least 23.42% CPU usage with a very long runtime, or 46.75% CPU usage with a runtime of at least 1.5 hours.


BASIC and MEDIUM compression are also head-to-head in scoring, with only a slight difference at 8x multiplexing and above; if the user aims for more storage-space efficiency, MEDIUM compression is suggested, trading a slight difference in time.

VIII. Further Works

For further work, experiments can use a variety of database sizes, from small to large, to determine which parameters are more effective for small databases and which for large ones.

References

[1] T. M. Connolly and C. E. Begg, "A Constructivist-Based Approach to Teaching Database Analysis and Design,"
Journal of Information Systems Education, Vol. 17(1), 2006.
[2] R. Schiesser, IT Systems Management (2nd ed.), Prentice Hall, 2010.
[3] Y. Son, J. Choi, J. Jeon, C. Min, S. Kim, H. Y. Yeom and H. Han, "SSD-assisted Backup and Recovery for
Database Systems," IEEE 33rd International Conference on Data Engineering, pp. 285-296, 2017.
[4] Q. Li and H. Xu, "Research on the Backup Mechanism of Oracle Database," International Conference on Environmental Science and Information Application Technology, Vol. 2, pp. 423-426, 2009.
[5] TheGeekDiary. [Online]. Available: https://www.thegeekdiary.com/beginners-guide-to-rman-compression-for-backups/.
[6] WysheidTeam, "General DBA Activities," [Online]. Available: https://www.wysheid.com/blog/general-dba-activities/rman-compressed-backup-learn-reduce-size-oracle-database-backups/.
[7] Aventis Systems. [Online]. Available: https://www.aventissystems.com/blog-Hard-Drive-Comparison-SATA-SAS-SSDs-s/12079.htm.
[8] Intel it center, "Evaluating apache hadoop software for big data etl," White Paper, 2014.
[9] A. Yamashita, T. Masuda, M. Kodera, T. Fukui and R. Tanaka, "Data Archiving and Retrieval for Spring-8
accelerator complex.," 1999.
[10] P. Vassiliadis, A. Simitsis and S. Skiadopoulos, "Conceptual modeling for ETL processes.," Proceedings of the
5th ACM international workshop on Data Warehousing and OLAP, p. 1421, 2002.
[11] C. Thomsen and T. Bach Pedersen, "pygrametl: A powerful programming framework for extract-transform-load
programmers.," Proceedings of the ACM twelfth international workshop on Data warehousing and OLAP, pp. 49-
56, 2009.
[12] M. Stonebraker, D. Abadi, D. J. DeWitt, S. Madden, E. Paulson, A. Pavlo and A. Rasin, "MapReduce and parallel
DBMSs: friends or foes?," Communications of the ACM, vol. 53, no. 1, pp. 64-71, 2010.
[13] S. Srivastava and R. Misra, "Archiving ERP data to enhance operational effectiveness: the case of Dolphin,"
International Journal of Information Technology and Management, vol. 16, no. 2, pp. 162-172, 2017.
[14] V. Ranjan, "A Comparative Study between ETL (Extract-Transform-Load) and E-LT (Extract-Load-Transform)
approach for loading data into a Data Warehouse," Research Paper, 2009.
[15] B. Pan, G. Zhang and X. Qin, "Design and realization of an ETL method in business intelligence project.," 2018
IEEE 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), pp. 275-279, 2018.
[16] Oracle, "Tuning Oracle Recovery Manager," Oracle White Paper, 2001.
[17] S. Misra, S. K. Saha and C. Mazumdar, "Performance comparison of hadoop based tools with commercial etl
tools–a case study.," International Conference on Big Data Analytics, pp. 176-184, 2013.
[18] J. Mensching and G. Corbitt, "ERP data archiving–a critical analysis," Journal of Enterprise Information
Management, vol. 17, no. 2, pp. 131-141, 2004.
[19] X. Liu, C. Thomsen and T. B. Pedersen, "ETLMR: a highly scalable dimensional ETL framework based on
mapreduce.," Transactions on Large-Scale Data-and Knowledge-Centered Systems VIII, pp. 1-31, 2013.
[20] A. Kemper and T. Neumann, "HyPer: A hybrid OLTP&OLAP main memory database system based on virtual
memory snapshots.," 2011 IEEE 27th International Conference on Data Engineering, pp. 195-206, 2011.
[21] S. R. Jeong, Y. G. I. Kim and J. H. Kim, "A new database archiving approach for effective storage and data management: a case study of data warehouse project in a Korean bank," International Journal of Advance Soft Computing Application, vol. 6, no. 3, pp. 31-46, 2014.
[22] S. S. Guo, Z. M. Yuan, A. B. Sun and Q. Yue, "A new ETL approach based on data virtualization.," Journal of
Computer Science and Technology, vol. 30, no. 2, pp. 311-323, 2015.
[23] A. Grover, J. Gholap, V. P. Janeja, Y. Yesha, R. Chintalapati, H. Marwaha and K. Modi, "SQL-like big data
environments: Case study in clinical trial analytics.," 2015 IEEE International Conference on Big Data (Big
Data), pp. 2680-2689, 2015.
[24] P. S. Diouf, A. Boly and S. Ndiaye, "Performance of the etl processes in terms of volume and velocity in the
cloud: State of the art.," 2017 4th IEEE International Conference on Engineering Technologies and Applied
Sciences (ICETAS), pp. 1-5, 2017.
[25] J. Chakraborty, A. Padki and S. K. Bansal, "Semantic ETL—State-of-the-Art and Open Research Challenges.,"
2017 IEEE 11th International Conference on Semantic Computing (ICSC), pp. 413-418, 2017.
[26] L. Cao, Z. Li, K. Qi, G. Xin and D. Zhang, "An efficient data extracting method based on hadoop.," International
Conference on Cloud Computing, pp. 87-97, 2014.
[27] S. Brandl and P. Keller-Marxer, "Long-term archiving of relational databases with Chronos.," First International
Workshop on Database Preservation (PresDB’07), 2007.
[28] M. Bala, O. Boussaid and Z. Alimazighi, "Big-ETL: extracting-transforming-loading approach for Big Data.,"
Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications
(PDPTA), p. 462, 2015.
