ISM Body-Report Format
Chapter 1
Case Study
1.1 Introduction
The session detailed the important function data protection strategies play in safeguarding
information assets. Three basic techniques for ensuring security and availability were
considered: backup, replication, and archiving. Emphasis was placed on how these techniques
help prevent a wide range of threats, including hardware failure, software malfunction, human
error, and intrusion. The session also showed that these methods make disaster recovery
feasible, since they enable recovery of critical data affected by an outage or system
disruption. Finally, the session explored what data archiving means for long-term access and
information compliance.
Chapter 2
Chapter 4
Data backup refers to creating a copy of your data and storing it in a separate location. Its
purpose is to ensure you can recover information lost to accidents, hardware failures, or
cyberattacks. There are three main types: full backups copy everything at once, incremental
backups copy only what has changed since the last backup of any kind, and cumulative
(also called differential) backups copy everything that has changed since the last full backup.
Backups offer peace of mind and data security but require additional storage space and take
time to set up and maintain.
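To make the three types concrete, here is a minimal Python sketch that selects which files each backup type would copy; the file names, timestamps, and backup history are hypothetical.

```python
from datetime import datetime

# Hypothetical file modification times and backup history.
files = {
    "report.docx": datetime(2024, 5, 3),
    "budget.xlsx": datetime(2024, 5, 1),
    "notes.txt": datetime(2024, 4, 20),
}
last_full = datetime(2024, 4, 25)    # most recent full backup
last_backup = datetime(2024, 5, 2)   # most recent backup of any kind

# Full: copy everything at once.
full = sorted(files)
# Incremental: only what changed since the last backup of any kind.
incremental = sorted(f for f, m in files.items() if m > last_backup)
# Cumulative (differential): everything changed since the last full backup.
cumulative = sorted(f for f, m in files.items() if m > last_full)

print(full)         # ['budget.xlsx', 'notes.txt', 'report.docx']
print(incremental)  # ['report.docx']
print(cumulative)   # ['budget.xlsx', 'report.docx']
```

Note how the cumulative set is always a superset of the incremental set: restoring from it needs only the last full backup plus one cumulative copy, at the cost of more storage per backup.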
The backup process can be initiated by either the server or client software. This triggers the
creation of backup copies based on the defined schedule. Backups can be configured to run
automatically at specific times or based on events. Backup clients communicate with the
backup server to identify data needing backup, transfer files, and verify successful
completion. The server manages these processes and coordinates with storage nodes for
secure data placement.
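The verify step of this client-server flow can be sketched in Python. The payload and the choice of SHA-256 are illustrative assumptions, not any particular product's protocol.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used to verify a transferred backup."""
    return hashlib.sha256(data).hexdigest()

# Client side: identify the data and compute a digest before transfer.
payload = b"hypothetical file contents"
client_digest = checksum(payload)

# (The network transfer would happen here; we simply hand the bytes over.)
received = payload

# Server side: confirm successful completion by recomputing the digest.
server_digest = checksum(received)
assert server_digest == client_digest
print("backup verified")
```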
The restore process involves retrieving lost or corrupted data from backups. Users initiate a
restore request, specifying the desired files or system state. The backup server locates the
necessary data in the chosen storage media and transfers it back to the designated location.
Among backup media, different options offer varying tradeoffs. Tape provides high capacity and
long archival life but slow access. HDDs are cost-effective but vulnerable to damage. SSDs
are faster but pricier. Cloud storage offers scalability and remote access at an ongoing cost.
NAS is user-friendly but requires network connectivity. Optical disks are durable for long-
term storage but have limited capacity and slower access.
Data replication is the process of creating and maintaining identical copies of your data in
multiple locations. This redundancy serves several critical purposes. Primarily, it enhances
data availability by ensuring information remains accessible even if a storage device or
server fails. Replication also improves performance by allowing geographically dispersed
users to access data from the closest copy, reducing latency. Furthermore, it can facilitate
disaster recovery by providing a readily available source for data restoration in case of major
outages.
The purpose of data replication extends beyond basic data redundancy: it offers a multi-
faceted approach to data management spanning availability, performance, and disaster recovery.
Data replication can be categorized into two main approaches based on the timing of data
updates on the replica: synchronous and asynchronous.
Synchronous Replication:
● Simultaneous Writes: Data is written to the primary storage and the replica at the same
time, and both systems acknowledge successful completion before the operation is
considered finished.
● High Data Consistency: This guarantees that the primary and secondary copies are
always identical, offering zero data loss and minimal lag between updates.
● Performance Impact: Due to the wait for acknowledgment, synchronous replication
can introduce latency, especially for geographically dispersed locations or large data
volumes.
● Suitable for mission-critical applications requiring high data consistency and
immediate availability, typically between geographically close sites.
Asynchronous Replication:
● Eventual Consistency: Data is written to the primary storage first, and then replicated
to the secondary at a later time. This allows for faster writes on the primary system
but introduces a potential lag between the primary and secondary copies.
● Scalability and Performance: Asynchronous replication offers better performance for
large data transfers or geographically dispersed locations as it doesn't require waiting
for acknowledgment.
● Potential Data Loss: There's a small chance of data loss if a disaster strikes before
data is replicated, as the secondary copy might not be fully up-to-date.
● Suitable for less critical data, disaster recovery scenarios where some data loss is
acceptable, or geographically dispersed locations where latency is a concern.
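The difference in acknowledgment timing can be sketched in Python. This is a toy model, not a real replication engine, and the record names are hypothetical.

```python
import queue
import threading

primary: list[str] = []
replica: list[str] = []

def synchronous_write(record: str) -> None:
    """Write to primary and replica; acknowledge only after both complete."""
    primary.append(record)
    replica.append(record)  # the caller waits for the replica write too

pending = queue.Queue()

def asynchronous_write(record: str) -> None:
    """Acknowledge after the primary write; replicate in the background."""
    primary.append(record)
    pending.put(record)     # the replica may briefly lag the primary

def replicator() -> None:
    # Background thread that drains queued writes to the replica.
    while True:
        replica.append(pending.get())
        pending.task_done()

threading.Thread(target=replicator, daemon=True).start()

synchronous_write("order-1")   # consistent the moment it returns
asynchronous_write("order-2")  # returns before the replica is updated
pending.join()                 # wait until the replica catches up
print(primary == replica)      # True
```

In the asynchronous case, a disaster striking between `asynchronous_write` returning and the replicator draining the queue is exactly the data-loss window described above.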
Data archiving is the systematic process of storing inactive data for long-term retention and
access. It's distinct from data backups, which primarily focus on disaster recovery.
Archiving serves a multitude of purposes for organizations. It ensures compliance with legal
regulations and industry standards that mandate the retention of specific data for a defined
period. Archived data can also provide a valuable historical record of an organization's
activities.
Data archiving methods can be broadly categorized into two main approaches based on their
accessibility: online and offline.
● Online: Fast access, ideal for frequently used data, but vulnerable and pricier.
● Offline: Super secure and durable for long-term storage, but slower retrieval.
The 3-2-1 rule is one of the most widely adopted data protection strategies, offering a
simple but effective way to safeguard your data. It focuses on redundancy and minimizes the
chance of data loss from threats such as hardware failure, software corruption, human error,
or natural disasters.
● 3 Copies: Keep at least three copies of your important data: the original on your
primary storage device and at least two backups.
● 2 Different Storage Media: Store your backups on two different kinds of media to
increase security. This offers you some protection if one form of media fails. For
instance, you might have one copy backed up onto an external hard drive and another
copy stored in the cloud.
● 1 Offsite Location: Keep at least one copy of your data in an offsite location
physically separate from the original. This keeps your information safe if a disaster,
such as a fire or a flood, destroys your main place of storage and even the on-site
backup.
1. Primary Data: Store your primary data on your main computer or server.
2. Local Backup: Create a backup on a different local storage device, such as an external
hard drive or a Network Attached Storage (NAS) device.
3. Off-site Backup: Create a second backup on a remote location, such as a cloud
storage service or an external hard drive kept at a different physical location.
By following the 3-2-1 rule, you ensure that your data is protected against a wide range of
potential data loss scenarios, including hardware failures, accidental deletions, and disasters.
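As a rough illustration, a small Python check of whether a backup plan satisfies the rule might look like this; the media labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Copy:
    media: str      # e.g. "hdd", "nas", "cloud", "tape"
    offsite: bool

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    """Check the 3-2-1 rule: >=3 copies, >=2 media types, >=1 offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

plan = [
    Copy("hdd", offsite=False),    # primary data on the main server
    Copy("nas", offsite=False),    # local backup on a NAS device
    Copy("cloud", offsite=True),   # off-site backup in cloud storage
]
print(satisfies_3_2_1(plan))      # True

# Dropping the off-site copy breaks the rule.
print(satisfies_3_2_1(plan[:2]))  # False
```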
There are several popular backup solutions available that cater to different needs and
preferences, ranging from local storage options to cloud-based services. Local backup
solutions like Acronis True Image and Carbonite offer comprehensive tools for creating
backups on external hard drives or Network Attached Storage (NAS) devices, providing fast
recovery times and greater control over data security. Cloud-based services such as
Backblaze, Dropbox, Google Drive, and Microsoft OneDrive are favored for their
convenience and accessibility, automatically syncing files to remote servers and enabling
access from anywhere with an internet connection. Hybrid solutions, such as IDrive and
CrashPlan, combine the best of both worlds, allowing users to back up data to both local
devices and cloud storage, ensuring robust data protection and redundancy. For enterprise-
level needs, solutions like Veeam and CommVault offer advanced features tailored to large-
scale environments, including comprehensive data management, disaster recovery, and
business continuity planning. Each of these solutions varies in terms of features, pricing, and
ease of use, so it's important to evaluate them based on your specific requirements, whether
for personal, small business, or enterprise use.
Cohesity
Overview: Cohesity provides a consolidated, software-defined platform for backup, archiving,
and data management across on-premises and cloud environments.
Key Features:
● Unified Management: A centralized platform for managing data across all locations
and storage tiers.
● Global Deduplication: Efficient storage usage through deduplication across all data
sources.
● Cloud Integration: Seamless integration with major cloud providers for backup and
archival.
● Ease of Use: Intuitive user interface and streamlined processes for managing data
protection.
● Distributed System: Scalable architecture designed for high availability and
performance.
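Global deduplication in general (not Cohesity's specific implementation) can be illustrated with a content-hash sketch; the block data here is hypothetical.

```python
import hashlib

def store_deduplicated(blocks: list[bytes], store: dict[str, bytes]) -> dict[str, bytes]:
    """Store only blocks whose content hash is not already present."""
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block
    return store

# Identical blocks arriving from two different sources share one stored copy.
source_a = [b"alpha", b"beta"]
source_b = [b"beta", b"gamma"]

store: dict[str, bytes] = {}
store_deduplicated(source_a, store)
store_deduplicated(source_b, store)
print(len(store))  # 3, not 4: the shared "beta" block is stored once
```

Because the index is keyed by content rather than by source, duplicates are eliminated across all data sources, which is what makes the deduplication "global".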
Dell EMC
Overview: Dell EMC offers a range of robust data protection and storage solutions designed
for enterprises, ensuring data integrity and availability.
Key Features:
● Data Domain: Purpose-built backup appliances with inline deduplication for efficient
storage.
● NetWorker and Avamar: Enterprise backup software for centralized, automated data
protection.
● RecoverPoint: Continuous data protection with recovery to any point in time.
● SRDF: Storage-array-based remote replication for disaster recovery.
Veritas
Overview: Veritas offers enterprise software for data backup, recovery, and information
governance across physical, virtual, and cloud environments.
Key Features:
● NetBackup: Enterprise-scale backup and recovery for large, heterogeneous environments.
● Backup Exec: Backup and recovery tailored to small and mid-sized businesses.
● Enterprise Vault: Data archiving for long-term retention and regulatory compliance.
Chapter 5
Remote replication is a crucial technique in data management and disaster recovery, ensuring
data redundancy and availability across geographically dispersed locations. It involves the
process of copying and synchronizing data from one location (primary site) to a remote
location (secondary site). This method enhances data protection by maintaining an up-to-date
copy of critical data in a different physical location, mitigating risks associated with
hardware failures, natural disasters, or other localized disruptions.
Remote replication ensures data protection by copying data from a primary site to a remote
site. The two basic modes are synchronous and asynchronous replication.
There are three types of remote replication technologies: host-based, where replication is
managed by software on the server level; storage array-based, integrated into the storage
system for efficient data transfer; and network-based, utilizing network protocols for
replication between sites, ensuring data redundancy and disaster recovery across
geographically dispersed locations.
Host-based remote replication uses the host's resources to perform and manage replication
tasks. There are two main approaches: LVM-based replication, where the logical volume
manager mirrors writes from source volumes to volumes at the remote site, and host-based
log shipping, where database transaction logs are periodically transmitted to and applied
at a standby host.
Storage array-based remote replication automates data backups between two storage devices,
managed by the storage system's intelligence and not the servers themselves. This offloads
tasks from the servers, allowing them to focus on running applications more efficiently.
There are three primary methods for array-based remote replication, synchronous,
asynchronous, and disk-buffered, each with its strengths and considerations.
Choosing the optimal replication mode depends on your specific needs. If data consistency is
paramount and any data loss is unacceptable, synchronous replication is the way to go. If
minimizing the performance impact on the application is critical, asynchronous replication
might be a better fit. Disk-buffered replication offers a compromise between speed,
bandwidth usage, and acceptable data loss.
Write Splitting: Data destined for the storage device is strategically duplicated. One copy is
directed to the storage itself, ensuring ongoing data accessibility. Simultaneously, another
copy is forwarded to a dedicated CDP appliance.
Local and Remote CDP Appliances: Strategically positioned CDP appliances, both at the
source and target, orchestrate the replication process. These appliances act as intelligent
intermediaries, facilitating efficient data transfer and maintaining data integrity.
● Asynchronous Replication: This mode prioritizes speed and efficiency. Writes are
accumulated, optimized to minimize redundant data transfer, and then periodically
transmitted to the remote CDP appliance. While offering faster performance,
asynchronous replication introduces a minimal risk of data loss if a disaster strikes
before the latest data is transferred.
● Synchronous Replication: This mode prioritizes data consistency. Every write
operation is acknowledged as completed on both the source and target storage devices
before the server is notified that the write has been successfully replicated. This
approach guarantees minimal data loss but can introduce performance slowdowns due
to the confirmation wait time.
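The write-splitting idea can be sketched in Python as a toy model; the chunk values are hypothetical, and a real CDP appliance would journal timestamps and metadata alongside the data.

```python
storage: list[bytes] = []
cdp_journal: list[bytes] = []

def split_write(data: bytes) -> None:
    """Duplicate each write: one copy to storage, one to the CDP appliance."""
    storage.append(data)       # keeps the data accessible on primary storage
    cdp_journal.append(data)   # forwarded to the CDP appliance's journal

for chunk in (b"w1", b"w2", b"w3"):
    split_write(chunk)

# Any earlier point in time can be reconstructed by replaying journal
# entries up to that point.
point_in_time = cdp_journal[:2]
print(point_in_time)  # [b'w1', b'w2']
```

Because every write passes through the splitter, the journal is a complete ordered history, which is what lets CDP recover to arbitrary points in time rather than only to discrete backup snapshots.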