ISM Body-Report Format

TABLE OF CONTENTS

Acknowledgement

Table of Contents

List of Figures

List of Tables

Chapter 1: Case Study (Week 1)

Chapter 2: Industry Session Summary on Virtual Storage Provisioning and Virtualization (Week 2)

Chapter 3: Peer to Peer Learning (Week 3)

Chapter 4: Industry Session Summary on Backup, Archive and Replication in Classic and Virtualized Environments (Week 4)

Chapter 5: Presentation and Summarization on Remote Replication (Final Solution)

Chapter 6: Conclusion

References
Chapter 1

Case Study
1.1 Introduction

The session detailed the critical role that data protection strategies play in safeguarding information assets. Three basic techniques for assuring security and availability were considered: backup, replication, and archival. Emphasis was placed on how these techniques help guard against a wide range of threats, including hardware failure, software malfunction, human error, and intrusion. The session also discussed how these methods make disaster recovery feasible, since they allow critical data affected by an outage or system disruption to be recovered. Finally, the session explored what data archiving means for long-term access and information compliance.

Chapter 2

Industry Session Summary on Virtual Storage Provisioning and Virtualization



Chapter 3

Peer to Peer Learning



Chapter 4

Industry Session Summary on Backup, Archive and Replication in Classic and Virtualized Environments
4.1 Introduction

The session examined data protection strategies for information assets, focusing on three core techniques for assuring security and availability: backup, replication, and archival. These techniques guard against a wide range of threats, including hardware failure, software malfunction, human error, and intrusion, and they make disaster recovery feasible by allowing critical data affected by an outage or system disruption to be restored. The session also covered what data archiving means for long-term access and information compliance. The sections below summarize each technique in turn.



4.2 Data Backup

Data backup refers to creating a copy of your data and storing it in a separate location. Its purpose is to ensure you can recover information lost to accidents, hardware failures, or cyberattacks. There are three main types: full backups copy everything at once; incremental backups copy only what has changed since the last backup of any type; and cumulative (differential) backups copy everything that has changed since the last full backup. Backups offer peace of mind and data security but require additional storage space and take time to set up and maintain.
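
To make these distinctions concrete, the minimal sketch below plans each backup type by comparing file modification times against a reference point. It assumes modification times are the only change signal; real backup tools rely on catalogs, change journals, or archive bits, so the names and logic here are illustrative only.

import os

def files_modified_since(root, since_epoch):
    """Return paths under root whose modification time is after since_epoch."""
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since_epoch:
                selected.append(path)
    return selected

def plan_backup(root, kind, last_full_at=0.0, last_backup_at=0.0):
    """Select files according to the backup type described above."""
    if kind == "full":
        return files_modified_since(root, 0.0)             # everything, every time
    if kind == "incremental":
        return files_modified_since(root, last_backup_at)  # since last backup of any type
    if kind == "cumulative":
        return files_modified_since(root, last_full_at)    # since last FULL backup
    raise ValueError(f"unknown backup type: {kind}")

# Example (illustrative path): plan_backup("/data", "incremental", last_backup_at=1718000000.0)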

4.2.1 Backup Workflow

The backup process can be initiated by either the server or client software. This triggers the
creation of backup copies based on the defined schedule. Backups can be configured to run
automatically at specific times or based on events. Backup clients communicate with the
backup server to identify data needing backup, transfer files, and verify successful
completion. The server manages these processes and coordinates with storage nodes for
secure data placement.

The restore process involves retrieving lost or corrupted data from backups. Users initiate a
restore request, specifying the desired files or system state. The backup server locates the
necessary data in the chosen storage media and transfers it back to the designated location.
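
The transfer-and-verify loop at the heart of this workflow can be sketched in a few lines. The sketch below assumes a plain directory stands in for the backup server's repository and uses a checksum comparison as the "verify successful completion" step; the function names are illustrative, not any product's API.

import hashlib
import shutil
from pathlib import Path

def sha256_of(path):
    """Checksum used to verify that a transfer completed intact."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_file(source: Path, repository: Path) -> Path:
    """Copy a file into the repository and verify it arrived intact."""
    repository.mkdir(parents=True, exist_ok=True)
    target = repository / source.name
    shutil.copy2(source, target)
    if sha256_of(source) != sha256_of(target):
        raise IOError(f"verification failed for {source}")
    return target

def restore_file(repository: Path, name: str, destination: Path) -> Path:
    """Retrieve a backed-up file and place it at the requested location."""
    destination.mkdir(parents=True, exist_ok=True)
    restored = destination / name
    shutil.copy2(repository / name, restored)
    return restored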

Among backup media, different options offer varying tradeoffs. Tape provides high capacity and
long archival life but slow access. HDDs are cost-effective but vulnerable to damage. SSDs
are faster but pricier. Cloud storage offers scalability and remote access at an ongoing cost.
NAS is user-friendly but requires network connectivity. Optical disks are durable for long-
term storage but have limited capacity and slower access.

Virtualization introduces a new layer of complexity to backups. Two primary approaches exist:

1. Agent-based Backup: A software agent is installed on each virtual machine (VM) to identify and transfer VM data for backup. This provides granular control but requires agent deployment on each VM.



2. Hypervisor-based Backup: The backup software interacts directly with the hypervisor
software managing the VMs. This simplifies deployment but may offer less granular
control over individual VM backups.

4.3 Data Replication

Data replication is the process of creating and maintaining identical copies of your data in
multiple locations. This redundancy serves several critical purposes. Primarily, it enhances
data availability by ensuring information remains accessible even if a storage device or
server fails. Replication also improves performance by allowing geographically dispersed
users to access data from the closest copy, reducing latency. Furthermore, it can facilitate
disaster recovery by providing a readily available source for data restoration in case of major
outages.

4.3.1 Purpose of Data Replication

The purpose of data replication extends beyond basic data redundancy. It offers a multi-
faceted approach to data management:

● High Availability: Replication creates geographically dispersed copies, ensuring data remains accessible even during hardware failures or planned maintenance events. This minimizes downtime and maximizes service continuity.
● Scalability and Performance: Replicating data across multiple servers distributes read workloads, improving responsiveness by allowing geographically distributed users to access data from the closest copy.
● Disaster Recovery: By maintaining synchronized copies, replication allows for faster data restoration during a catastrophic outage. This minimizes data loss and streamlines recovery times.
● Load Balancing: Replication can distribute write workloads across multiple servers, alleviating bottlenecks and improving overall system performance.

4.3.2 Categories of Data Replication

Data replication can be categorized into two main approaches based on the timing of data
updates on the replica: synchronous and asynchronous.



Synchronous Replication:

● Write Acknowledgment: Data is written to the primary storage and the replica simultaneously. Both systems acknowledge successful completion before the operation is considered finished.
● High Data Consistency: This guarantees that the primary and secondary copies are always identical, offering zero data loss and minimal lag between updates.
● Performance Impact: Because each write waits for acknowledgment, synchronous replication can introduce latency, especially for geographically dispersed locations or large data volumes.
● Use Case: Suitable for mission-critical applications requiring high data consistency and immediate availability, typically deployed between geographically close locations.

Asynchronous Replication:

● Eventual Consistency: Data is written to the primary storage first, and then replicated
to the secondary at a later time. This allows for faster writes on the primary system
but introduces a potential lag between the primary and secondary copies.
● Scalability and Performance: Asynchronous replication offers better performance for
large data transfers or geographically dispersed locations as it doesn't require waiting
for acknowledgment.
● Potential Data Loss: There's a small chance of data loss if a disaster strikes before
data is replicated, as the secondary copy might not be fully up-to-date.
● Use Case: Suitable for less critical data, disaster recovery scenarios where some data loss is acceptable, or geographically dispersed locations where latency is a concern.
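
The timing difference between the two modes can be illustrated with a short sketch. The dictionaries below stand in for the primary and secondary copies and a queue models the transmission buffer; this is a toy model of the acknowledgment semantics, not a real replication engine.

import queue
import threading

primary, replica = {}, {}
pending = queue.Queue()  # asynchronous buffer of writes awaiting transmission

def write_synchronous(key, value):
    """Acknowledge only after BOTH copies are updated: zero lag, higher latency."""
    primary[key] = value
    replica[key] = value       # the remote write happens before we return
    return "ack"

def write_asynchronous(key, value):
    """Acknowledge after the primary write; replication happens in the background."""
    primary[key] = value
    pending.put((key, value))  # the replica lags until the buffer drains
    return "ack"

def drain_to_replica():
    """Background transmitter: applies buffered writes to the secondary copy."""
    while True:
        key, value = pending.get()
        replica[key] = value
        pending.task_done()

threading.Thread(target=drain_to_replica, daemon=True).start()
write_synchronous("row0", "v1")   # replica already holds row0 when this returns
write_asynchronous("row1", "v2")  # replica may not hold row1 yet
pending.join()                    # demo only: wait for the replica to catch up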

4.4 Data Archival

Data archiving is the systematic process of storing inactive data for long-term retention and
access. It's distinct from data backups, which primarily focus on disaster recovery.

4.4.1 Purpose of Data Archival

Archiving serves a multitude of purposes for organizations. It ensures compliance with legal
regulations and industry standards that mandate the retention of specific data for a defined
period. Archived data can also provide a valuable historical record of an organization's activities, projects, and financial performance, serving as a reference point for future
decision-making and strategic planning. Additionally, archived data may hold untapped
potential for future analysis. By preserving historical information, organizations can leverage
big data analytics to uncover hidden trends, identify new opportunities, and gain a deeper
understanding of their operations over time.

4.4.2 Types of Data Archival

Data archiving methods can be broadly categorized into two main approaches based on their
accessibility: online and offline.

● Online: Fast access, ideal for frequently used data, but more exposed to threats and more expensive.
● Offline: Highly secure and durable for long-term storage, but slower to retrieve.

4.5 The 3-2-1 Rule for Data Protection

The 3-2-1 rule is one of the most commonly adopted strategies for data protection, offering a simple but effective way to safeguard your data. It focuses on redundancy and minimizes the chance of data loss from threats such as hardware failure, software corruption, human error, or natural disasters.

4.5.1 Components of 3-2-1 Rule

The breakdown of the 3-2-1 rule is as follows:

● 3 Copies: Keep at least three copies of your important data: the original on your primary storage device plus at least two backups.
● 2 Different Storage Media: Store your backups on two different kinds of media to
increase security. This offers you some protection if one form of media fails. For
instance, you might have one copy backed up onto an external hard drive and another
copy stored in the cloud.
● 1 Offsite Location: Keep at least one copy of your data in an offsite location physically detached from the original. This keeps your information safe if a disaster, such as a fire or flood, destroys your main place of storage and even the on-site backup.



4.5.2 Implementation of 3-2-1 Rule

Here is the step-by-step implementation of the 3-2-1 Rule:

1. Primary Data: Store your primary data on your main computer or server.
2. Local Backup: Create a backup on a different local storage device, such as an external
hard drive or a Network Attached Storage (NAS) device.
3. Off-site Backup: Create a second backup in a remote location, such as a cloud storage service or an external hard drive kept at a different physical site.

By following the 3-2-1 rule, you ensure that your data is protected against a wide range of
potential data loss scenarios, including hardware failures, accidental deletions, and disasters.
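
As a rough sketch of how these three steps might be scripted, assuming placeholder paths and a stub standing in for a real cloud upload:

import shutil
from pathlib import Path

def upload_offsite(path: Path):
    """Placeholder for the offsite copy, e.g. an object-storage upload."""
    print(f"would upload {path} to the offsite location")

def apply_3_2_1(original: Path, external_drive: Path):
    # Copy 1 is the original on primary storage (it already exists).
    # Copy 2: a second storage medium, e.g. an external hard drive or NAS share.
    external_drive.mkdir(parents=True, exist_ok=True)
    shutil.copy2(original, external_drive / original.name)
    # Copy 3: one copy offsite, physically detached from the first two.
    upload_offsite(original)

# Example (illustrative paths): apply_3_2_1(Path("report.docx"), Path("/mnt/external/backups"))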

4.6 Overview of Popular Backup Solutions

There are several popular backup solutions available that cater to different needs and
preferences, ranging from local storage options to cloud-based services. Local backup
solutions like Acronis True Image and Carbonite offer comprehensive tools for creating
backups on external hard drives or Network Attached Storage (NAS) devices, providing fast
recovery times and greater control over data security. Cloud-based services such as
Backblaze, Dropbox, Google Drive, and Microsoft OneDrive are favored for their
convenience and accessibility, automatically syncing files to remote servers and enabling
access from anywhere with an internet connection. Hybrid solutions, such as IDrive and
CrashPlan, combine the best of both worlds, allowing users to back up data to both local
devices and cloud storage, ensuring robust data protection and redundancy. For enterprise-
level needs, solutions like Veeam and CommVault offer advanced features tailored to large-
scale environments, including comprehensive data management, disaster recovery, and
business continuity planning. Each of these solutions varies in terms of features, pricing, and
ease of use, so it's important to evaluate them based on your specific requirements, whether
for personal, small business, or enterprise use.

Cohesity



Overview: Cohesity provides a modern, comprehensive data management platform that
simplifies backup, recovery, and data management across on-premises and cloud
environments.

Key Features:

● Unified Management: A centralized platform for managing data across all locations
and storage tiers.
● Global Deduplication: Efficient storage usage through deduplication across all data
sources.
● Cloud Integration: Seamless integration with major cloud providers for backup and
archival.
● Ease of Use: Intuitive user interface and streamlined processes for managing data
protection.
● Distributed System: Scalable architecture designed for high availability and
performance.

Dell EMC

Overview: Dell EMC offers a range of robust data protection and storage solutions designed
for enterprises, ensuring data integrity and availability.

Key Features:

● Data Domain: Reliable, high-performance deduplication storage systems for backup and recovery.
● Use Case: Ideal for enterprise environments requiring scalable, reliable data
protection solutions.

Veritas

Overview: Veritas is a leading provider of data protection and management solutions, offering a suite of tools for backup, recovery, and data availability.

Key Features:



● NetBackup: Comprehensive enterprise backup solution with support for various
environments and workloads.
● Backup Exec: Scalable backup and recovery software tailored for small to medium-
sized businesses.
● Flex Appliance: Flexible, scalable data protection solution that can be deployed in various configurations.
● Use Case: Suitable for businesses of all sizes needing reliable, versatile data
protection and management solutions.

Chapter 5

Presentation and Summarization on Remote Replication


5.1 Introduction

Remote replication is a crucial technique in data management and disaster recovery, ensuring
data redundancy and availability across geographically dispersed locations. It involves the
process of copying and synchronizing data from one location (primary site) to a remote
location (secondary site). This method enhances data protection by maintaining an up-to-date
copy of critical data in a different physical location, mitigating risks associated with
hardware failures, natural disasters, or other localized disruptions.

5.2 Modes of Remote Replication

Remote replication ensures data protection by copying data from a primary site to a remote
site. The two basic modes are synchronous and asynchronous replication.

Synchronous Replication: In synchronous replication, each write operation must be committed to both the source and the remote replica before acknowledging the write as
complete to the host. This process ensures that the data on the source and the replica is
always identical, with writes being transmitted in the exact order received, maintaining data
consistency and write ordering. If the source site fails, synchronous replication provides a zero or near-zero recovery point objective (RPO), meaning minimal to no data loss.
However, this mode increases application response time because the write must be confirmed
by both sites before the host receives an acknowledgment. The impact on response time
depends on the distance between sites, network bandwidth, and quality of service.
Synchronous replication is typically deployed for distances less than 200 km (125 miles) due
to these latency issues.

Asynchronous Replication: In asynchronous replication, writes are immediately acknowledged to the host once they are committed to the source. The data is then buffered
and transmitted to the remote site later. This approach eliminates the impact on application
response time, making it suitable for deployment over much greater distances, ranging from
several hundred to several thousand kilometers. The required bandwidth can be provisioned
based on the average write workload, with buffering used during times of insufficient
bandwidth. Asynchronous replication provides a nonzero RPO because the remote site data
lags behind the source data by at least the buffer size. This method can also optimize
bandwidth usage by only transmitting the final version of data if the same location is written
multiple times in the buffer.
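
The write-coalescing behavior described above lends itself to a brief sketch. Keying the buffer by block address, as below, means repeated writes to the same location collapse into one entry, so only the final version is transmitted at flush time; the class and addresses are illustrative assumptions.

class CoalescingBuffer:
    def __init__(self):
        self._pending = {}  # block address -> latest data written to that block

    def write(self, block_address, data):
        """Acknowledged immediately; supersedes any earlier pending write."""
        self._pending[block_address] = data

    def flush(self, send):
        """Transmit one final version per block, then clear the buffer."""
        for address, data in self._pending.items():
            send(address, data)
        self._pending.clear()

buf = CoalescingBuffer()
buf.write(42, b"v1")
buf.write(42, b"v2")  # overwrites v1 in the buffer before transmission
buf.flush(lambda addr, data: print(f"block {addr}: {data!r}"))  # sends v2 only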



5.3 Remote Replication Technologies

There are three types of remote replication technologies: host-based, where replication is
managed by software on the server level; storage array-based, integrated into the storage
system for efficient data transfer; and network-based, utilizing network protocols for
replication between sites, ensuring data redundancy and disaster recovery across
geographically dispersed locations.

5.3.1 Host-Based Remote Replication

Host-based remote replication uses the host's resources to perform and manage replication
tasks. There are two main approaches:

● LVM-Based Remote Replication: Managed at the volume group level, it involves transmitting writes from source volumes to a remote host via Logical Volume
Manager (LVM). Initial synchronization between identical volume groups and logical
volumes can be done through backup or IP replication. It supports both synchronous
and asynchronous modes without needing specialized hardware.

● Host-Based Log Shipping: This host-based replication technology is commonly used in databases. Transactions to the source database are captured in logs and periodically
transmitted to a remote host, where they are applied to a standby database. This
ensures data consistency with a finite Recovery Point Objective (RPO), utilizing low
network bandwidth by transmitting log files at regular intervals.
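
As a rough illustration of the log-shipping idea, the sketch below uses two SQLite databases as the source and the standby. Real database engines ship their own transaction-log format; plain SQL statements are used here only to keep the example self-contained.

import sqlite3

source = sqlite3.connect(":memory:")
standby = sqlite3.connect(":memory:")
shipped_log = []  # stands in for log files transmitted at regular intervals

def execute_and_log(sql):
    """Apply a transaction on the source and capture it for later shipping."""
    source.execute(sql)
    source.commit()
    shipped_log.append(sql)

def apply_shipped_log():
    """Periodic job on the remote host: replay captured entries on the standby."""
    while shipped_log:
        standby.execute(shipped_log.pop(0))
    standby.commit()

execute_and_log("CREATE TABLE orders (id INTEGER, item TEXT)")
execute_and_log("INSERT INTO orders VALUES (1, 'disk array')")
apply_shipped_log()  # the standby now matches the source, with finite lag (RPO)
print(standby.execute("SELECT * FROM orders").fetchall())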



5.3.2 Storage Array-Based Remote Replication

Storage array-based remote replication automates data backups between two storage devices,
managed by the storage system's intelligence and not the servers themselves. This offloads
tasks from the servers, allowing them to focus on running applications more efficiently.

There are three primary methods for replication, each with its strengths and considerations:

● Synchronous Replication: This mode prioritizes data consistency. Every write operation is confirmed as completed on both the source and target storage device
before acknowledging the write back to the server. This guarantees minimal data loss
but can introduce performance slowdowns due to the confirmation wait time.

● Asynchronous Replication: This method prioritizes speed. Writes are acknowledged to the server immediately after being received by the source storage device. The data
is then copied to the target storage device later. This offers faster performance but
comes with a slight risk of data loss if a disaster hits before the copy is complete. The recovery point objective (RPO), which reflects the potential amount of data loss, is
greater than zero with asynchronous replication.

● Disk-Buffered Replication: This mode balances performance and bandwidth usage. It creates a temporary local copy of the source data first, and then transmits that copy to
the target storage device. This approach offers faster performance compared to
synchronous replication and requires less network bandwidth. However, the RPO is
typically several hours, as there's a lag between creating the local copy and
transmitting it to the target.

Choosing the optimal replication mode depends on your specific needs. If data consistency is
paramount and minimal data loss is unacceptable, synchronous replication is the way to go. If
prioritizing performance and minimizing downtime is critical, asynchronous replication
might be a better fit. Disk-buffered replication offers a compromise between speed,
bandwidth usage, and acceptable data loss.
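
A minimal sketch of the disk-buffered idea, assuming plain directories stand in for the local buffer and the remote target: the application waits only for the first, local stage, and the gap before the second stage runs is what produces the multi-hour RPO.

import shutil
from pathlib import Path

def stage_local_copy(source: Path, local_buffer: Path) -> Path:
    """Stage 1: fast point-in-time local copy; the only step the host waits on."""
    local_buffer.mkdir(parents=True, exist_ok=True)
    staged = local_buffer / source.name
    shutil.copy2(source, staged)
    return staged

def transmit_to_target(staged: Path, remote_target: Path):
    """Stage 2: ship the staged copy later, off the application's critical path."""
    remote_target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(staged, remote_target / staged.name)

# Example (illustrative paths):
# staged = stage_local_copy(Path("db.dump"), Path("/staging"))
# transmit_to_target(staged, Path("/mnt/remote-site"))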

5.3.3 Network-Based Remote Replication


Network-based remote replication offers a distinct approach to data protection compared to
storage array-based methods. One prominent technology within this realm is Continuous
Data Protection (CDP) for remote replication. CDP empowers granular recovery by enabling
restoration to any point in time.

Write Splitting: Data destined for the storage device is strategically duplicated. One copy is
directed to the storage itself, ensuring ongoing data accessibility. Simultaneously, another
copy is forwarded to a dedicated CDP appliance.

Local and Remote CDP Appliances: Strategically positioned CDP appliances, both at the
source and target, orchestrate the replication process. These appliances act as intelligent
intermediaries, facilitating efficient data transfer and maintaining data integrity.

Network-based replication offers flexibility in terms of data consistency and performance through asynchronous and synchronous modes:

● Asynchronous Replication: This mode prioritizes speed and efficiency. Writes are
accumulated, optimized to minimize redundant data transfer, and then periodically
transmitted to the remote CDP appliance. While offering faster performance,
asynchronous replication introduces a minimal risk of data loss if a disaster strikes
before the latest data is transferred.
● Synchronous Replication: This mode prioritizes data consistency. Every write
operation is acknowledged as completed on both the source and target storage devices
before the server is notified that the write has been successfully replicated. This
approach guarantees minimal data loss but can introduce performance slowdowns due
to the confirmation wait time.



The remote CDP appliance acts as a data repository, storing the received writes in a
designated journal. At predefined intervals, the appliance efficiently transfers data from the
journal to the target storage device, ensuring a complete and recoverable backup copy.
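
The write-splitting and journal mechanics can be sketched briefly. The tick counter below is a stand-in for real timestamps and the dictionaries are illustrative, but the replay loop shows how a CDP journal enables restoration to any point in time.

import itertools

_clock = itertools.count()  # stand-in for real timestamps in this demo
production = {}             # the copy directed to the storage itself
journal = []                # (tick, block, data) entries held by the CDP appliance

def split_write(block, data):
    """Write splitter: one copy to production storage, one to the CDP journal."""
    production[block] = data
    journal.append((next(_clock), block, data))

def recover_as_of(tick):
    """Rebuild the volume state at any past point by replaying the journal."""
    state = {}
    for t, block, data in journal:
        if t <= tick:
            state[block] = data
    return state

split_write("blk0", "morning version")    # recorded at tick 0
checkpoint = next(_clock)                 # tick 1 marks "now"
split_write("blk0", "afternoon version")  # recorded at tick 2
print(recover_as_of(checkpoint))          # -> {'blk0': 'morning version'}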

For geographically dispersed deployments, specialized network technologies like Dense Wavelength Division Multiplexing (DWDM) or Synchronous Optical Network (SONET) can be implemented to facilitate efficient and reliable data transfer over extended distances.
