
International Journal of Engineering and Advanced Technology (IJEAT)

ISSN: 2249-8958 (Online), Volume-9 Issue-3, February 2020

Snapshot Based Disaster Recovery on Cloud


Vishnu A., Arokia Paul Rajan R.

Abstract: Nowadays, data is produced in enormous volumes, so it is important to address the recovery concerns associated with it. Cloud computing refers to the interconnection of various systems for the purpose of sharing resources, and the cloud is one of the fastest growing platforms in the IT sector. A great deal of private data resides in the cloud; therefore, the need for data recovery in the cloud is gaining importance day by day, and efficient and effective data recovery techniques have to be developed. Data recovery helps users retrieve information from backup servers whenever the main server is down. Many solutions have been developed for disaster recovery. This paper discusses some of the techniques and solutions needed to recover data in cloud architectures, proposes a snapshot based backup technique for databases, and reports its successful implementation.

Keywords: Cloud computing, Disaster recovery, Data recovery, Backup, Disaster recovery planning.

I. INTRODUCTION

Cloud computing is one of the fastest growing technologies in the IT sector. It refers to the interconnection of various systems for the purpose of sharing resources and is an internet-based technology, implemented mainly to share resources. With its invention, it has replaced older technologies such as grid computing. Clients gain many advantages from using cloud computing; in particular, users obtain supercomputing-level and high computing power at a lower cost. Because cloud computing mainly involves the sharing of computing resources, a large number of users share the same storage and other computing resources. As a result, there is a strong need for a mechanism that prevents other users from accessing useful data, either intentionally or accidentally, or from modifying it. The stored data can also be affected by natural disasters such as flood and fire, and once it is severely affected by such calamities it becomes very difficult to recover. The term cloud computing may be defined in more detail as a system in which plenty of computer system resources are obtained without the direct involvement of the users [1].

Cloud computing may also be viewed as a data center that is available to an effectively unlimited number of users. The cloud storage may be limited to a single organization or shared across multiple organizations depending upon the usage. Experts have concluded that after the introduction of the cloud, IT infrastructure costs were reduced to a large extent. The advantages of the cloud are not restricted to one field; it brings benefits in many different domains. After the introduction of the cloud, users were able to run their applications faster while maintaining manageability, so IT teams were able to meet requirements beyond what they expected [2].

A vast amount of data is stored in the cloud. This data may be of high importance to an organization or a company, so the data stored in the cloud should be highly protected. When a sudden system crash or a power failure occurs, data stored in the cloud can be lost, which may cause huge financial losses for some organizations. Data can also be lost due to natural disasters, so when a disaster occurs the data stored in the cloud must be protected. Even leading IT companies, including Google and Microsoft, have faced this type of disaster loss. When something happens to the local system, in other words on the client side, the user does not have to worry about the data stored in that system, because the data is automatically saved in the cloud and can easily be recovered from there. But this is not the case the other way around: when something happens to the cloud, the entire data can be lost. Natural disasters occur due to bad weather, deforestation, sand mining and so on, and when a disaster occurs nothing can be done by humans to keep it away; we have to face its consequences [3].

The background and related works are discussed in Section II. Section III describes the problem associated with the previous work and the experimental setup under which the proposed algorithm has been implemented. Section IV presents the results and a discussion of the experiments performed, and Section V presents the conclusion and the future scope of the proposed technique.

Revised Manuscript Received on February 05, 2020.
* Correspondence Author
Vishnu A., MSc Student, Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India. E-mail: [email protected]
Arokia Paul Rajan R.*, Associate Professor, Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India. E-mail: [email protected]
© The Authors. Published by Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Retrieval Number: C5259029320/2020©BEIESP | DOI: 10.35940/ijeat.C5259.029320 | Journal Website: www.ijeat.org

II. RELATED WORKS

Kriti Sharma proposed an algorithm against disaster known as the Seed Block Algorithm [4]. This algorithm relies on a backup server located at a remote site, and it explains in detail the solutions to be applied whenever there is a data loss. Many techniques have been proposed for the problem of data loss in cloud computing.


Among these, the most common methods are the Seed Block Algorithm, Parity Cloud Service, High distribution and Rake technology, Shared Backup Router Resources, and Efficient Routing Grounded on Taxonomy.

The Seed Block Algorithm is a time efficient algorithm used to recover files. It has several advantages: it maintains data integrity and addresses problems of implementation, cost and complexity. The algorithm collects information from the user and then recovers a file if it has been deleted accidentally or intentionally, and it focuses on simplicity of backup and recovery. The Seed Block Algorithm mainly involves a remote backup server, the main cloud, and its clients, and it uses the Exclusive-OR (XOR) operation. If the user loses data from the main cloud for some reason, the original file can be retrieved by XORing the backup copy with the corresponding seed block. In the Seed Block Algorithm there is always a random number and a unique id for each client; whenever a client registers in the main cloud, the random number and the client id are XORed with each other to produce the seed block for that particular client.

Whenever a client creates a file, it is stored in the main cloud. The main file is then XORed with the seed block of that particular client, and the XORed result is stored as a file on the remote server. If for any reason the main file gets destroyed, the user can recover the original file by XORing the stored file with the seed block corresponding to the client. The Seed Block Algorithm gives importance to security, protecting the data without using any of the existing encryption techniques. Although it has several advantages, it also has demerits: the files stored on the remote server occupy as much space as the originals in the cloud, which leads to wastage of memory and makes the method inefficient. Its efficiency can be increased by applying compression techniques to the files.

Fig 1. Sample Output Image of SBA Algorithm
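To make the XOR mechanism described above concrete, the following is a minimal illustrative sketch in Python. It is not the authors' implementation of the Seed Block Algorithm; the block length and the way the seed block is derived from the client id and random number are assumptions made only for demonstration.

import secrets

def make_seed_block(client_id: bytes, length: int = 1024) -> bytes:
    """Derive a per-client seed block by XORing a random block with the
    client id (repeated to the block length), as described for SBA."""
    random_block = secrets.token_bytes(length)
    repeated_id = (client_id * (length // len(client_id) + 1))[:length]
    return bytes(r ^ c for r, c in zip(random_block, repeated_id))

def xor_with_seed(data: bytes, seed_block: bytes) -> bytes:
    """XOR data with the seed block (repeated to the data length).
    Applying the same operation twice restores the original data."""
    repeated_seed = (seed_block * (len(data) // len(seed_block) + 1))[:len(data)]
    return bytes(d ^ s for d, s in zip(data, repeated_seed))

# Backup: the copy kept on the remote server is the file XORed with the seed.
seed = make_seed_block(b"client-42")
original = b"important customer records"
remote_copy = xor_with_seed(original, seed)

# Recovery: XORing the remote copy with the same seed returns the original file.
assert xor_with_seed(remote_copy, seed) == original

Because XOR is its own inverse, the same routine serves for both backup and recovery, which is what keeps the scheme simple; the cost, as noted above, is that the XORed copy is exactly as large as the original file.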
The Efficient Routing Grounded on Taxonomy (ERGOT) method was proposed by Paolo Missier and colleagues [5]. The method is based on semantic analysis and does not depend on time complexity. The model is made of three components:
1. Distributed Hash Table (DHT)
2. Semantic Overlay Network (SON)
3. A measure of semantic similarity
This method provides an efficient way of retrieving data based on the semantic similarity between service descriptions and service requests. It proposes semantic-driven queries that are answered in a DHT-based system by implementing a SON over the DHT. The main disadvantage of this method is its increased time and implementation complexity.

Noriharu Miyaho proposed High distribution and Rake technology, a backup model called HS-DRT that uses an ultra-widely distributed data transfer mechanism [6]. The mechanism also employs high-speed encryption technology and has two sequences: a Backup sequence and a Recovery sequence. In the Backup sequence, users store the data that is to be backed up; the Recovery sequence is used when a disaster occurs in the main server. Like other methods, this one also has certain limitations, so it cannot be considered a perfect data recovery mechanism. Although it can be implemented on devices such as mobile phones and laptops, the cost of data recovery is very high and there is also a chance of increased redundancy.

The Cold and Hot Back-Up Strategy was proposed by Lili Sun and Yang Yang [7]. It is a trigger-based backup and recovery mechanism that becomes active when a failure is detected. In the CBSRS (Cold Backup Service Replacement Strategy) recovery mechanism, the trigger becomes active when a service failure is detected and remains inactive when there is no failure. In HBSRS (Hot Backup Service Replacement Strategy) the process is different: HBSRS, also known as a Transcendental Recovery Strategy, is used in dynamic networks. When execution starts, the backup services remain in an activated state, and the first results obtained from a service are used to ensure the successful execution of the service composition. The main disadvantage of this mechanism is that the implementation cost increases as the data grows.

The Backup for Cloud and Disaster Recovery for Consumers and SMBs method was proposed by Vijaykumar Javaraiah for data backup in the cloud and for disaster recovery [8]. This method reduces the cost of backing up data in the cloud and also protects the data from disaster. Migrating data from one cloud to another is easy in this approach, and since consumers do not depend upon service providers, the cost of recovering the data is reduced. The only equipment needed to recover the data in this method is a hardware box.

The Rent Out the Rented Resources method was proposed by Sheheryar Malik [10]. Reducing the monetary cost of service is the main aim of this model, which has three phases:
1. Discovery
2. Matchmaking
3. Authentication
The method relies on cloud vendors that rent resources from various ventures and, immediately after virtualization, rent these resources out to clients.

The Parity Cloud Service model was proposed by Chi-Won Song [9]. It provides a privacy-protected recovery service for personal data and differs from other methods in that the user need not upload the data to the server; the recovery resources required on the client side stay within a reasonable bound. The main disadvantage of this method is its high implementation complexity.

The Shared Backup Router Resources (SBRR) method was proposed by Eleni Palkopoulou [11].


This method mainly aims at reducing implementation cost and provides a solution when a router fails: it uses an IP address that is not affected even if the router fails. The model shows how external requirements have a direct impact on the SBRR architecture. Its main disadvantage is that, while it reduces the implementation cost, it cannot exploit optimization concepts.

III. PROPOSED ALGORITHM & EXPERIMENT SETUP

Based on an extensive study of the literature, this work proposes a Snapshot based Disaster Recovery technique. An application was developed in Python using the Django framework, a sample database was created using MySQL, and Node.js is used to deploy the application architecture. The proposed Snapshot based Disaster Recovery technique is presented as the following algorithm:

Step 1: Select the database for backup. A sample MySQL database is shown in Figure 1.

Fig. 1. Sample database

Figure 2 shows the execution of the connection from the application environment.

Fig. 2. Connection to the Database

Step 2: Set the time interval for backup.
Step 3: Call the backup function in each interval.
Step 4: Create a .sql file of the database using mysqldump in Node.js.
Step 5: Convert the .sql file into a base64 string.

Figure 3 shows the creation of the repository store for the backup snapshots.

Fig. 3. Repository creation for backup files

Figure 4 shows the execution of the backup process by the Node.js component.

Fig. 4. Execution of backup process

Figure 5 presents the creation of a snapshot of the database at a specified time interval.

Fig. 5. Creation of snapshot of the database

Step 6: Store the base64 string in another database.
Step 7: Connect to the backup database in which the snapshots are stored.
Step 8: Fetch data from the backup database for a particular date and time.


Step 9: Convert the base64 string back into a .sql file.
Step 10: The snapshot of the database is created.

Figure 6 presents the snapshots of the database at the configured time intervals. Each time stamped snapshot can serve as a recovery backup for the application.

Fig. 6. Automatic Snapshot of the database

Thus the database backups are successfully created and stored. The data is stored as a .sql file in a folder every minute; each .sql file is then converted into base64-encoded data and stored in another database on another server. Based on the requirements and the nature of the application, the time interval can be changed from one minute to whatever time slice the user wants. A minimal sketch of this backup flow is given below.
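The following sketch illustrates the periodic backup path (Steps 1 through 6) in Python rather than the Node.js component used in the actual implementation. The database names, credentials, and the db_snapshots table with taken_at and payload columns are illustrative assumptions, not the authors' setup; mysql-connector-python is assumed as the client library.

import base64
import subprocess
import time
from datetime import datetime

import mysql.connector  # assumed client library; any MySQL driver would do

BACKUP_INTERVAL_SECONDS = 60  # Step 2: one-minute interval, adjustable as needed

def take_snapshot(db_name: str) -> None:
    """Steps 4-6: dump the database, base64-encode it, and store it with a timestamp."""
    # Step 4: create a .sql dump of the source database using mysqldump.
    dump = subprocess.run(
        ["mysqldump", "--user=backup_user", "--password=secret", db_name],
        capture_output=True, check=True,
    ).stdout

    # Step 5: convert the .sql dump into a base64 string.
    encoded = base64.b64encode(dump).decode("ascii")

    # Step 6: store the base64 string in another database (on another server).
    conn = mysql.connector.connect(
        host="backup-server", user="backup_user", password="secret", database="snapshots"
    )
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO db_snapshots (taken_at, payload) VALUES (%s, %s)",
        (datetime.now(), encoded),
    )
    conn.commit()
    conn.close()

# Step 3: call the backup function at each interval (Step 1: the selected database).
while True:
    take_snapshot("sample_db")
    time.sleep(BACKUP_INTERVAL_SECONDS)

In a production setting the loop would typically be replaced by a scheduler, but the structure of the sketch mirrors the algorithm: each iteration produces one time stamped snapshot row in the remote backup database.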

IV. RESULTS DISCUSSION

The implemented technique is efficient because the snapshots are time stamped, and it also gives high importance to data security. The method stores the database as an SQL file in a folder every minute; each SQL file is then converted into base64-encoded data and stored in another database on another server. Later, when the data is recovered, it is decoded back into its normal form. An application has been created and suitable experiments conducted, and the performance of the recovery system is observed in terms of Write Time and Read Time. A sketch of the corresponding recovery path (Steps 7 through 10) is shown below.
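As with the backup sketch above, this is an illustrative Python version of the recovery path, not the authors' code. The table and column names, the "latest snapshot at or before the requested time" query, and restoring by piping the dump into the mysql client are assumptions made for the sketch.

import base64
import subprocess
from datetime import datetime

import mysql.connector  # assumed client library, as in the backup sketch

def restore_snapshot(taken_at: datetime, target_db: str) -> None:
    """Steps 7-10: fetch the base64 snapshot for a given time, decode it, and restore it."""
    # Step 7: connect to the backup database holding the snapshots.
    conn = mysql.connector.connect(
        host="backup-server", user="backup_user", password="secret", database="snapshots"
    )
    cursor = conn.cursor()

    # Step 8: fetch the snapshot taken at (or just before) the requested date and time.
    cursor.execute(
        "SELECT payload FROM db_snapshots WHERE taken_at <= %s "
        "ORDER BY taken_at DESC LIMIT 1",
        (taken_at,),
    )
    (encoded,) = cursor.fetchone()
    conn.close()

    # Step 9: convert the base64 string back into a .sql dump.
    sql_dump = base64.b64decode(encoded)

    # Step 10: replay the dump against the target database to recreate the snapshot.
    subprocess.run(
        ["mysql", "--user=backup_user", "--password=secret", target_db],
        input=sql_dump, check=True,
    )

# Example: restore the state of the sample database as of a chosen timestamp.
restore_snapshot(datetime(2020, 2, 5, 10, 30), "sample_db_restored")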
Table 1. Snapshot creation time and I/O time measurement based on Write & Read Time

Snapshot block size | Snapshot creation time (seconds) | Write Time (ms) | Read Time (ms)
64 kb               | 2.76                             | 18              | 8
8 kb                | 0.72                             | 16              | 9.5
4 kb                | 0.3                              | 15              | 8
0.5 kb              | 0.06                             | 10              | 8

From the experimental results, it is evident that write snapshots perform well for read-intensive workloads, while read snapshots perform well for write-intensive workloads. The proposed technique is found to be efficient in handling unexpected damage to databases, particularly for cloud application architectures. Although the proposed method has advantages, it also has certain disadvantages [12]: on the remote backup server the data is not highly secure. Encoding and decoding the data provides a degree of protection, and resolving this issue fully will be an extension of this research. Fig. 7 presents the time taken for creation of snapshots on remote servers in a sample experiment.

Fig. 7 The snapshot creation time of different blocks

Fig. 8 shows the results in terms of average I/O response time for different snapshot sizes.

Fig. 8 Average I/O response time of Read and Write of Snapshots

V. CONCLUSION

Cloud computing is becoming very popular in our day-to-day life, and many organizations now work on the basis of cloud computing. At first they were not aware of disasters or of the recovery mechanisms available in the cloud, and when a disaster occurs a company has to face huge losses, especially financial losses, because of the absence of proper recovery mechanisms.

In this paper, we proposed and implemented a snapshot based data restoration technique. Since the proposed method is based on time stamping, it is easy to build recovery solutions, and the method is able to recover the files stored in the databases within seconds. The method also addresses the security of the data during recovery: the data kept in the backup database is stored in an encoded form for protection. The user stores the database as an SQL file in a folder every minute; the SQL file is then converted into base64-encoded data and stored in another database on another server. Later, when the data is recovered, it is decoded back to its normal form. The experimental results of this research can help storage solution designers to design economical backup solutions.


The cost of implementing this method is also low compared to other methods. Securing the backup servers with additional security features will be the extension of this research work.

REFERENCES
1. Barrie Sosinsky, Cloud Computing Bible, Wiley India Pvt. Ltd, 2010.
2. S. Hamadah and D. Aqel, "A Proposed Virtual Private Cloud-Based
Disaster Recovery Strategy," 2019 IEEE Jordan International Joint
Conference on Electrical Engineering and Information Technology
(JEEIT), Amman, Jordan, 2019, pp. 469-473.
3. M. M. Alshammari, A. A. Alwan, A. Nordin and I. F. Al-Shaikhli,
"Disaster recovery in single-cloud and multi-cloud environments:
Issues and challenges," 2017 4th IEEE International Conference on
Engineering Technologies and Applied Sciences (ICETAS),
Salmabad, 2017, pp. 1-7.
4. K. Sharma and K. R. Singh, "Seed Block Algorithm: A Remote Smart
Data Back-up Technique for Cloud Computing," 2013 International
Conference on Communication Systems and Network Technologies,
Gwalior, 2013, pp. 376-380.
5. G. Pirró, P. Trunfio, D. Talia, P. Missier and C. Goble, "ERGOT: A
Semantic-Based System for Service Discovery in Distributed
Infrastructures," 2010 10th IEEE/ACM International Conference on
Cluster, Cloud and Grid Computing, Melbourne, VIC, 2010, pp.
263-272.
6. Y. Ueno, N. Miyaho, S. Suzuki and K. Ichihara, "Performance
Evaluation of a Disaster Recovery System and Practical Network
System Applications," 2010 Fifth International Conference on Systems
and Networks Communications, Nice, 2010, pp. 195-200.
7. L. Sun, J. An, Y. Yang and M. Zeng, "Recovery strategies for service
composition in dynamic network," 2011 International Conference on
Cloud and Service Computing, Hong Kong, 2011, pp. 60-64.
8. V. Javaraiah, "Backup for cloud and disaster recovery for consumers
and SMBs," 2011 Fifth IEEE International Conference on Advanced
Telecommunication Systems and Networks (ANTS), Bangalore, 2011,
pp. 1-3.
9. C. Song, S. Park, D. Kim and S. Kang, "Parity Cloud Service: A
Privacy-Protected Personal Data Recovery Service," 2011 IEEE 10th
International Conference on Trust, Security and Privacy in Computing
and Communications, Changsha, 2011, pp. 812-817.
10. S. Malik and F. Huet, "Virtual Cloud: Rent Out the Rented Resources,"
2011 International Conference for Internet Technology and Secured
Transactions, Abu Dhabi, 2011, pp. 536-541.
11. E. Palkopoulou, D. A. Schupke and T. Bauschert, "Recovery Time
Analysis for the Shared Backup Router Resources (SBRR)
Architecture," 2011 IEEE International Conference on
Communications (ICC), Kyoto, 2011, pp. 1-6.
12. O. H. Alhazmi and Y. K. Malaiya, "Evaluating disaster recovery plans
using the cloud," 2013 Proceedings Annual Reliability and
Maintainability Symposium (RAMS), Orlando, FL, 2013, pp. 1-6.

AUTHORS PROFILE
Vishnu A, pursuing MSc Computer Science from the
Department of Computer Science, CHRIST (Deemed to
be University), Bengaluru, India. His area of research
interest is Cloud computing.

Dr Arokia Paul Rajan R. is currently working as Associate Professor, Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India. He holds a Ph.D. in Computer Science & Engineering. His research area is data management in Cloud architectures.
