Module 4 Storage Efficiency
Data Reduction
In general, Data Reduction reduces the amount of physical storage that is required
to save a dataset. Data Reduction helps reduce the Total Cost of Ownership (TCO)
of a Dell Unity XT storage system.
Data Reduction is supported on thin storage resources:
• Thin LUNs
• Thin LUNs within Consistency Groups
• Thin file systems
• Thin VMware VMFS and NFS Datastores
Data Reduction can also be enabled on Block and File storage resources
participating in replication sessions. The source and destination storage resources
in a replication session are independent. Data Reduction with or without the
Advanced Deduplication option can be enabled or disabled separately on the
source and destination resource.
Considerations
• Hybrid pools on Dell Unity XT systems support Data Reduction, with or without
Advanced Deduplication, on Traditional or Dynamic pools.
• To support Data Reduction, the pool must contain a flash tier and the total
usable capacity of the flash tier must meet or exceed 10% of the total pool
capacity.
− Data Reduction can be enabled on an existing resource if the flash capacity
requirement is met.
• The Advanced Deduplication switch is available only on:
For Data Reduction enabled storage resources, the Data Reduction process occurs
during the System Cache proactive cleaning operations or when System Cache is
flushing cache pages to the drives within a Pool. The data in this scenario may be
new to the storage resource, or the data may be an update to existing blocks of
data currently residing on disk.
In either case, the Data Reduction algorithm occurs before the data is written to the
drives within the Pool. During the Data Reduction process, multiple blocks are
aggregated together and sent through the algorithm. After determining if savings
can be achieved or data must be written to disk, space within the Pool is allocated
if needed, and the data is written to the drives.
Process:
1. System write cache sends data to the Data Reduction algorithm during
proactive cleaning or flushing.
2. Data Reduction logic determines any savings.
3. Space is allocated in the storage resource for the dataset if needed, and the
data is sent to the disk.
Data is sent to the Data Reduction algorithm during proactive cleaning or flushing
of write path data.
In the example, an 8 KB block enters the Data Reduction algorithm and Advanced
Deduplication is disabled.
• The 8 KB block is first passed through the deduplication algorithm. Within this
algorithm, the system determines if the block consists entirely of zeros, or
matches a known pattern within the system.
• If a pattern is detected, the private space metadata of the storage resource is
updated to include information about the pattern, along with information about
how to re-create the data block if it is accessed in the future.
• Also, when deduplication finds a pattern match, the remainder of the Data
Reduction feature is skipped for those blocks, which saves system resources.
None of the 8 KB block of data is written to the Pool at this time.
• If a block was allocated previously, then the block can be freed for reuse. When
a read for the block of data is received, the metadata is reviewed, and the block
is re-created and sent to the host.
• If a pattern is not found, the data is passed through the Compression Algorithm.
If savings are achieved, space is allocated on the Pool to accommodate the
data.
• If the data is an overwrite, it may be written to the original location if it is the
same size as before.
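To make this write path concrete, here is a minimal Python sketch of the decision flow described above: pattern detection first, then compression, then allocation. The pattern table, block contents, and return values are hypothetical stand-ins; the array's actual algorithms are internal to the system.

import zlib

BLOCK_SIZE = 8 * 1024          # Data Reduction operates on 8 KB blocks
ZERO_BLOCK = bytes(BLOCK_SIZE)

# Tiny stand-in for the system's known-pattern table (hypothetical).
KNOWN_PATTERNS = {ZERO_BLOCK: "all-zeros"}

def reduce_block(block):
    # 1. Pattern detection: all-zero or known-pattern blocks are recorded
    #    in metadata only; nothing is written to the Pool.
    if block in KNOWN_PATTERNS:
        return ("metadata-only", KNOWN_PATTERNS[block], 0)
    # 2. Compression: store the compressed form only if it saves space.
    compressed = zlib.compress(block)
    if len(compressed) < len(block):
        return ("compressed", None, len(compressed))
    # 3. No savings: allocate and write the full block as-is.
    return ("uncompressed", None, len(block))

print(reduce_block(ZERO_BLOCK))            # ('metadata-only', 'all-zeros', 0)
print(reduce_block(b"A" * BLOCK_SIZE)[0])  # 'compressed'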
The example displays the behavior of the Data Reduction algorithm when
Advanced Deduplication is disabled.

[Figure: an 8 KB block enters the deduplication algorithm, which looks for zeros
and common patterns. If a pattern is detected, the private space metadata is
updated to include the pattern reference, and processing ends.]
When Advanced Deduplication is enabled, each 8 KB block receives a fingerprint, which is compared to the fingerprints for the
storage resource. If a matching fingerprint is found, deduplication occurs. The
private space within the resource is updated to include a reference to the block of
data residing on disk. No data is written to disk at this time.
[Figure: with Advanced Deduplication enabled, each block passes through
fingerprint calculation, and the fingerprint is compared against the fingerprint
cache. On a match, the private space of the resource is updated to reference the
existing data on disk; on no match, the block proceeds to the compression
algorithm and the fingerprint cache is updated.]
Through machine learning and statistics, the fingerprint cache determines which
fingerprints to keep, and which ones to replace with new fingerprints. The
fingerprint cache algorithm learns which resources have high deduplication rates
and allows those resources to consume more fingerprint locations.
• If no fingerprint match is detected, the blocks enter the compression algorithm.
• If savings can be achieved, space is allocated within the Pool which matches
the compressed size of the data, the data is compressed, and the data is written
to the Pool. When Advanced Deduplication is enabled, the fingerprint for the
block of data is also stored with the compressed data on disk.
• The fingerprint cache is then updated to include the fingerprint for the new data.
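The fingerprint flow can be illustrated with a toy cache. This conceptual Python sketch uses SHA-256 and least-recently-used eviction as stand-ins; the real fingerprint function, cache sizing, and the machine-learning replacement policy are internal to the system.

import hashlib
from collections import OrderedDict

class FingerprintCache:
    # Toy fingerprint cache: keeps recently hit fingerprints and evicts
    # cold entries (a stand-in for the learning/statistics logic).
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # fingerprint -> on-disk location

    def lookup_or_insert(self, block, location):
        fp = hashlib.sha256(block).digest()
        if fp in self.entries:
            self.entries.move_to_end(fp)        # hit: dedupe, reference existing block
            return ("dedup", self.entries[fp])
        if len(self.entries) >= self.capacity:  # evict the coldest fingerprint
            self.entries.popitem(last=False)
        self.entries[fp] = location             # miss: compress, write, store fingerprint
        return ("write", location)

cache = FingerprintCache()
print(cache.lookup_or_insert(b"data-1", "pool0:blk-17"))  # ('write', 'pool0:blk-17')
print(cache.lookup_or_insert(b"data-1", "pool0:blk-99"))  # ('dedup', 'pool0:blk-17')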
Data Reduction can be enabled on resources that are built from hybrid flash pools
within the Dell Unity XT 380, 480, 680 and 880 systems.
The properties page of a multi-tiered pool that includes SAS Flash 3 and SAS drives
Flash Capacity
To support Data Reduction, the proportion of flash capacity on the pool must be
equal to or exceed 10% of the total capacity of the pool.
This Flash Percent (%) value allows enabling data reduction for resources that are built from the
pool
Storage Resource
Enabling Data Reduction is only possible if the pool flash percent value
requirements are met.
• For pools with a lower flash capacity, the feature is unavailable and a message
is displayed.
• Advanced Deduplication is also supported for the data reduction enabled
resources.
Both Data Reduction and Advanced Deduplication can be enabled for a LUN built from Pool 0
The proportion of flash capacity utilization can also be verified from the Details
pane of a selected pool.
Pools page with selected pool showing the flash capacity utilization on the details pane
In the example, Pool 1 is selected and the details pane shows that the pool has
57% of flash capacity utilization.
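Checking eligibility against the 10% requirement is simple arithmetic. A minimal Python sketch (illustrative only, not a system API), using the 57% figure from the example:

def data_reduction_allowed(flash_capacity_tb, total_capacity_tb):
    # Data Reduction requires flash capacity >= 10% of total pool capacity.
    return flash_capacity_tb / total_capacity_tb >= 0.10

# Pool 1 from the example: 57% flash utilization easily qualifies.
print(data_reduction_allowed(57.0, 100.0))  # True
# A pool with only 5% flash would have the feature grayed out.
print(data_reduction_allowed(5.0, 100.0))   # False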
Add the Data Reduction and Advanced Deduplication columns to the resources
page, to verify which resources have the features enabled.
LUNs page showing the Data Reduction and Advanced Deduplication columns
In the example, all the LUNs created on the dynamic pool Pool 0 are configured
with data reduction and advanced deduplication.
To monitor the Data Reduction Savings, select the storage resource and open the
properties page. The savings are reported in GBs, percent savings, and as a ratio
on the General tab.
LUN properties showing the Data Reduction Savings on the General tab
In the example, the properties of LUN-1 show a data reduction savings of 38.5 GB.
The percentage that is saved and ratio reflect the savings on the storage resource.
To remove Data Reduction savings for block resources, use the Move operation.
For file resources, since there is no Move operation, users can use host-based
migration or replication. For example, you can asynchronously replicate a file
system to another file system within the same pool using UEMCLI commands.
Data Reduction stops for new writes when sufficient resources are not available
and resumes automatically after enough resources are available.
[Screenshot callouts: enable Data Reduction by selecting the checkbox; the
Advanced Deduplication checkbox then becomes available for selection. With the
checkbox cleared, there is no Data Reduction on the LUN.]
To review which Consistency Groups contain Data Reduction enabled LUNs, select
the Consistency Group tab, which is found on the Block page.
On this page, columns that are named Data Reduction and Advanced
Deduplication can be added to the current view.
• Click the Gear Icon and select Data Reduction or Advanced Deduplication
under Column.
• The Data Reduction and Advanced Deduplication columns have three potential
entries, No, Yes, and Mixed.
− No is displayed if none of the LUNs within the Consistency Group has the
option enabled.
− Yes is displayed if all LUNs within the Consistency Group have the option
enabled.
− Mixed is displayed when the Consistency Group has some LUNs with Data
Reduction enabled and other LUNs with Data Reduction disabled.
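The column logic above reduces to a few lines. A conceptual Python sketch (not Unisphere code; it assumes the Consistency Group contains at least one LUN):

def dr_column_value(luns_dr_enabled):
    # luns_dr_enabled: one boolean per LUN in the Consistency Group.
    if all(luns_dr_enabled):
        return "Yes"
    if not any(luns_dr_enabled):
        return "No"
    return "Mixed"

print(dr_column_value([True, True]))    # Yes
print(dr_column_value([False, False]))  # No
print(dr_column_value([True, False]))   # Mixed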
The Local LUN Move feature, also known as Move, provides native support for
moving LUNs and VMFS Datastores online between pools or within the same pool.
This ability allows for manual control over load balancing and rebalancing of data
between pools.
Local LUN Move leverages Transparent Data Transfer (TDX) technology, a
multithreaded data copy engine. Local LUN Move can also be leveraged to migrate a
Block resource's data to or from a resource with Data Reduction or Advanced
Deduplication enabled.
If Advanced Deduplication is supported and enabled, the data also passes through
the Advanced Deduplication algorithm. This allows space savings to be achieved
during the migration.
When migrating to a resource with Data Reduction disabled, all space savings that
are achieved on the source are removed during the migration.
Launch Wizard
To add drives from different tiers to an All-Flash pool (with Data Reduction enabled
resources), select the pool and Expand Pool.
Details pane of a dynamic pool showing the Flash Percent, and number of storage resources built
from it
In the example, the All-Flash pool Pool 2 has four LUNs with Data Reduction and
Advanced Deduplication enabled.
Select Tiers
Select the storage tiers with available drives to expand the pool, and optionally
change the hot spare capacity for newly added tiers.
The Performance and Capacity tiers are selected. The Extreme Performance
tier cannot be selected since there are no unused drives.
Select Drives
Select the amount of storage from each tier to add to the All-Flash pool. The
number of drives must comply with the RAID width plus the hot spare capacity that
is defined for the tier.
In the example, six SAS drives comply with the RAID 5 (4+1) plus one hot spare
requirement. Seven NL-SAS drives comply with the RAID 6 (4+2) plus one hot
spare requirement.
If the expansion does not cause an increase in the spare space the pool requires,
the new free space is made available for use. When extra drives increase the spare
space requirement, a portion of the space being added is reserved. The reserved
space is equal to the size of one or two drives for every 32 drives.
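As an illustration of the per-32-drives rule, the sketch below rounds the spare requirement up per started group of 32 drives of one type. Treat it as an approximation: whether one or two drives' worth is reserved depends on the configuration, as noted above.

import math

def required_spare_drives(drive_count):
    # Reserve spare space worth roughly one drive per 32 drives of a type.
    return math.ceil(drive_count / 32)

# Expanding from 20 to 40 drives of one type crosses the 32-drive boundary,
# so an additional drive's worth of space is reserved rather than freed.
print(required_spare_drives(20))  # 1
print(required_spare_drives(40))  # 2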
Summary
Verify the proposed configuration with the expanded drives. Select Finish to start
the expansion process.
In the example, the expansion process adds 13 drives to the All-Flash pool and
converts it to a multi-tiered pool.
Pool Expanded
Verify the pool conversion on the details pane. The number of tiers and drives
increased. Observe that the flash percent supports Data Reduction.
Details pane of a selected pool showing the Flash Percent, number of tiers and drives
In the example, the expansion included two new tiers and added 13 new drives to
the pool. The flash percent is over 10% which ensures that Data Reduction is
supported.
If the pool contains Data Reduction enabled resources and the additional capacity
would drop the Flash Percent value below 10%, the expansion of the dynamic pool
is blocked.
• To support Data Reduction, the proportion of flash capacity must be equal to or
exceed 10% of the total capacity of the pool.
• If the requirement is not met, the wizard displays a warning message when
trying to advance to the next step.
In the example, the Flash Percent capacity utilization of Pool 0 is 19%, and the
addition of 14 NL-SAS drives reduces the value below the 10% requirement.
To conclude the expansion, select a number of drives that keep the flash percent
capacity utilization within the Data Reduction requirements.
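The wizard's check amounts to recomputing the flash percentage with the proposed drives included. A hedged Python sketch of that check; the capacities are illustrative, since the guide does not give the exact drive sizes for Pool 0:

def flash_percent_after_expansion(flash_tb, total_tb, added_flash_tb, added_other_tb):
    # Recompute the pool's flash percentage after a proposed expansion.
    new_total = total_tb + added_flash_tb + added_other_tb
    return 100 * (flash_tb + added_flash_tb) / new_total

# Roughly the Pool 0 scenario: 19% flash today; adding a large NL-SAS
# capacity (no flash) dilutes the pool below the 10% threshold, so the
# wizard blocks the expansion.
before = flash_percent_after_expansion(19.0, 100.0, 0.0, 0.0)
after = flash_percent_after_expansion(19.0, 100.0, 0.0, 110.0)
print(f"{before:.1f}% -> {after:.1f}%")  # 19.0% -> 9.0%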
In Hybrid pools, the metadata has priority over user data for placement on the
fastest drives.
• Algorithms ensure that metadata go to the tier that provides the fastest access.
Usually this is the Extreme Performance (SAS Flash) tier.
• If necessary, user data is moved to the next available tier (Performance or
Capacity), to give space to metadata created on the pool.
The system monitors how much metadata is created as the resources grow, and
the pool space is consumed.
• The system automatically estimates how much metadata can be created based
on the free capacity.
− The estimate considers the amount of metadata that is generated, the pool
usable capacity, and the free space within each tier.
FAST VP tries to relocate user data out of the Flash tier to free space for metadata.
In this situation, the pool status changes to OK, Needs Attention (seen from the
Pools page, and the pool properties General tab). A warning informs the
administrator to increase the amount of flash capacity within the pool to address
the issue.
In the example, the system identifies metadata blocks to move from the
Performance to the Extreme Performance tier.
To review the status of Data Reduction and Advanced Deduplication on each of the
LUNs created on the system, go to the Block page in Unisphere. The page can be
accessed by selecting Block under Storage in the left pane.
To add these and other columns to the view, click the Gear Icon in the upper right
portion of the LUNs tab and select the columns to add under the Columns option.
Data Reduction provides savings information at many different levels within the
system, and in many different formats.
• Savings information is provided at the individual storage resource, pool, and
system levels.
• Savings information is reported in GBs, percent savings, and as a ratio.
• Total GBs saved includes the savings due to Data Reduction on the storage
resource, Advanced Deduplication savings, and savings which are realized on
any Snapshots and Thin Clones taken of the resource.
• The percentage that is saved and the ratio reflect the savings within the storage
resource itself. All savings information is aggregated and then displayed at the
Pool level and System level.
Space savings information in the three formats is available within the Properties
window of the storage resource.
For LUNs, you must either access the Properties page from the Block page, or on
the LUN tab from within the Consistency Group Properties window.
Shown is the total GBs saved, which includes savings within data used by
Snapshots and Thin Clones of the storage resource. Also shown is the % saved
and the Data Reduction ratio, which both reflect the savings within the storage
resource. File System and VMware VMFS Datastores display the same
parameters.
Data Reduction savings are shown on the General tab within the LUN Properties
Window.
Data Reduction information is also aggregated at the Pool level on the Usage tab.
Savings are reported in the three formats, including the GBs saved, % savings, and
ratio.
• The GBs savings reflect the total amount of space saved due to Data Reduction
on storage resources and their Snapshots and Thin Clones.
• The % saved and the Ratio reflect the average space that is saved across all
Data Reduction enabled storage resources.
System level Data Reduction Savings information is displayed within the System
Efficiency view block that is found on the system Dashboard page. If the view
block is not shown on your system, you can add it by selecting the Main tab,
clicking Customize, and adding the view block.
The system level aggregates all savings across the entire system and displays
them in the three formats available, GBs saved, % saved, and ratio.
• For the GBs saved, this value is the total amount of space saved due to Data
Reduction, along with savings achieved by Snapshots and Thin Clones of Data
Reduction enabled storage resources.
• The % savings and ratio are the average savings that are achieved across all
data reduction enabled storage resources.
Overview
The space reporting updates affect the System, Pool, and Storage Resource
values. Users can use the formulas displayed here to calculate and verify the
Data Reduction savings percentage and ratio for the System, Pools, and Storage
Resources.
Example
The example shows the calculation for Data Reduction savings ratio on a LUN.
(44.7 GB + 45.7 GB) / 44.7 GB = 90.4 GB / 44.7 GB = 2.02:1
Example
The example displays the formula for calculating the Data Reduction percentage
savings.
45.7 GB / (45.7 GB + 44.7 GB) × 100 = 45.7 GB / 90.4 GB × 100 = 51%
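The same two formulas in executable form, a small Python sketch using the example's numbers (the function name is illustrative):

def dr_savings(saved_gb, used_gb):
    # Ratio and percentage as computed in the examples above.
    total_gb = saved_gb + used_gb            # size before reduction
    ratio = total_gb / used_gb               # e.g., 90.4 / 44.7
    percent = 100 * saved_gb / total_gb      # e.g., 45.7 / 90.4 * 100
    return ratio, percent

ratio, percent = dr_savings(saved_gb=45.7, used_gb=44.7)
print(f"{ratio:.2f}:1, {percent:.0f}% saved")  # 2.02:1, 51% saved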
Overview
Storage resources using Data Reduction can be replicated with any supported
replication software, including Native Synchronous Block Replication and Native
Asynchronous Replication to any supported destination.
• Replication can occur to or from a Dell Unity XT that does not support Data
Reduction.
• Data Reduction can be enabled or disabled on source or destination
independently.
As data is migrated from the source VNX system to the Dell Unity XT system, it
passes through the Data Reduction algorithm as it is written to the Pool.
File Import
Data Reduction and Advanced Deduplication with Native File and Block Import
FAST VP
FAST VP Overview
When reviewing the access patterns for data within a system, most access patterns
show a basic trend. Typically, the data is most heavily accessed near the time it
was created, and the activity level decreases as the data ages. This trend is also
described as the lifecycle of the data. Dell EMC Unity Fully Automated Storage
Tiering for Virtual Pools - FAST VP monitors the data access patterns within pools
on the system.
[Figure: the three drive categories: Flash drives, SAS drives, and NL-SAS drives.]
FAST VP classifies drives into three categories, called tiers. These tiers are:
• Extreme Performance Tier – Comprised of Flash drives
• Performance Tier – Comprised of SAS drives
• Capacity Tier – Comprised of NL-SAS drives
Dell EMC Unity has a unified approach to create storage resources on the system.
Block LUNs, file systems, and the VMware datastores can all exist within a single
pool, and can all benefit from using FAST VP. In system configurations with
minimal amounts of Flash, FAST VP uses the Flash drives for active data,
regardless of the resource type. For efficiency, FAST VP uses low cost spinning
drives for less active data. Access patterns for all data within a pool are compared
against each other. The most active data is placed on the highest performing drives
according to the storage resource’s tiering policy. Tiering policies are explained
later in this document.
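Before turning to the specific policies, a toy model of a relocation pass may help: rank data by recent activity and fill the fastest tier first. The Python sketch below is a simplification; the slice names and I/O rates are invented, and real FAST VP relocates data during scheduled windows according to each resource's tiering policy:

def relocate(slices, flash_capacity_slices):
    # Rank data slices by recent activity and place the hottest on flash.
    ranked = sorted(slices, key=lambda s: s["io_per_hour"], reverse=True)
    for i, s in enumerate(ranked):
        s["tier"] = "flash" if i < flash_capacity_slices else "sas/nl-sas"
    return ranked

slices = [{"name": "db-index", "io_per_hour": 9000},
          {"name": "archive",  "io_per_hour": 3},
          {"name": "logs",     "io_per_hour": 450}]
for s in relocate(slices, flash_capacity_slices=1):
    print(s["name"], "->", s["tier"])
# db-index -> flash; logs and archive -> sas/nl-sas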
Tiering Policies
FAST VP Tiering policies determine how the data relocation takes place within the
storage pool. The available FAST VP policies are displayed here.
The Tier label is used to describe the various categories of media used within a
pool. In a physical system, the tier directly relates to the drive types used within the
pool. The available tiers are Extreme Performance Tier using Flash drives, the
Performance Tier using SAS drives, and the Capacity Tier using NL-SAS drives.
On a Dell EMC UnityVSA system, a storage tier of the virtual drives must be
created manually. The drives should match the underlying characteristics of the
virtual disk.
The table shows the available tiering policies with their descriptions and the initial
tier placement that corresponds to each policy:

• Start High then Auto-Tier (default) – Initial placement on the highest available
tier; data is then relocated based on activity.
• Auto-Tier – Initial placement distributed across all tiers; data is then relocated
based on activity.
• Highest Available Tier – Data is placed and kept on the highest tier with
available space.
• Lowest Available Tier – Data is placed and kept on the lowest tier with
available space.
Users can select the RAID protection for each tier being configured when creating
a pool. A single RAID protection is selected per tier, and after the pool is created,
the RAID configuration cannot be changed. A RAID protection can be selected
again only when the pool is expanded with a new drive type.
This table shows the supported RAID types and drive configurations.
When considering a RAID configuration that includes many drives (12+1, 12+2,
14+2), consider the tradeoffs of the larger drive counts: they can lead to longer
rebuild times and larger fault domains.
FAST VP Management
The user can change the system-level data relocation configuration using the
Global settings window.
Select the Settings option on the top of the Unisphere page to open the Settings
window.
Storage Pool
FAST VP relocation at the pool level can also be verified from the pool properties
window.
In Unisphere, select a pool and click the edit icon to open its properties window.
Then select the FAST VP tab.
You also have the option to manually start a data relocation by clicking the Start
Relocation button. To modify the FAST VP settings, click the Manage FAST VP
system settings link in the upper right side of the window.
At the storage resource level, the user can change the tiering policy for the data
relocation.
In Unisphere, select the block or file resource and click the pencil icon to open its
properties window. Then select the FAST VP tab.
From this page, it is possible to edit the tiering policy for the data relocation.
The example shows the properties for LUN_2. The FAST VP page displays the
information of the tiers that are used for data distribution.
Thin Clones
A Thin Clone is a read/write copy of a thin block storage resource that shares
blocks with the parent resource. Thin Clones created from a thin LUN, Consistency
Group, or the VMware VMFS datastore form a hierarchy.
A Base LUN family is the combination of the Base LUN, and all its derivative Thin
Clones and snapshots. The original or production LUN for a set of derivative
snapshots, and Thin Clones is called a Base LUN. The Base LUN family includes
snapshots and Thin Clones based on child snapshots of the storage resource or its
Thin Clones.
Data available on the source snapshot is immediately available to the Thin Clone.
The Thin Clone references the source snapshot for this data. Data resulting from
changes to the Thin Clone after its creation is stored on the Thin Clone.
A snapshot of the LUN, Consistency Group, or VMFS datastore that is used for the
Thin Clone create and refresh operations is called a source snapshot. The original
parent resource is the resource (LUN, Consistency Group, or datastore) or Thin
Clone from which the snapshot used by the Thin Clone was taken.
Thin Clones are created from attached read-only or unattached snapshots with no
auto-deletion policy and no expiration policy set. Thin Clones are supported on all
Dell Unity models including Dell UnityVSA.
In the example, the Base LUN family for LUN1 includes all the snapshots and Thin
Clones that are displayed in the diagram.
[Diagram: the Base LUN family of LUN 1: snapshots Snap 1 through Snap 5, Thin
Clone 1 and Thin Clone 2 created from read-only snapshots, and Application 1
and Application 2 accessing the Thin Clones.]
A thin clone is displayed in the LUNs page, which shows the details and properties
of the clone. You can expand a thin clone by selecting it and then selecting the
View/Edit option. For example, a thin clone created from a 100 GB Base LUN can
later be expanded.
All data services remain available on the parent resource after the creation of the
thin clone. Changes to the thin clone do not affect the source snapshot, because
the source snapshot is read-only.
With thin clones, users can make space-efficient copies of the production
environment. Thin clones use pointer-based technology, so a thin clone does not
consume much space from the storage pool. The thin clones share the space with
the base resource rather than allocating a copy of the source data, which provides
benefits to the user.
Users can also apply data services to thin clones. Data services include host I/O
limits, host access configuration, manual or scheduled snapshots, and replication.
With thin clone replication, a full clone is created on the target side, which is an
independent copy of the source LUN.
A maximum of 16 thin clones per Base LUN can be created. The combination of
snapshots and thin clones cannot exceed 256.
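Those limits can be encoded as a simple pre-flight check. A minimal Python sketch; the limits come from the text above, while the function itself is hypothetical:

MAX_CLONES_PER_BASE = 16      # thin clones per Base LUN
MAX_SNAPS_AND_CLONES = 256    # snapshots + thin clones combined

def can_create_thin_clone(clone_count, snap_count):
    # Check the documented Base LUN family limits before creating a clone.
    return (clone_count + 1 <= MAX_CLONES_PER_BASE and
            clone_count + snap_count + 1 <= MAX_SNAPS_AND_CLONES)

print(can_create_thin_clone(clone_count=15, snap_count=10))  # True (16th clone)
print(can_create_thin_clone(clone_count=16, snap_count=10))  # False (limit reached)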
Thin Clone operations: Users can create, refresh, view, modify, expand, and
delete a thin clone.

Data Services: Most LUN data services can be applied to thin clones: host I/O
limits, host access configuration, manual/scheduled snapshots, and replication.
The use of Thin Clones is beneficial for the types of activities that are explained
here.
Thin Clones allow development and test personnel to work with real workloads and
use all data services that are associated with production storage resources without
interfering with production.
For parallel processing applications that span multiple servers, the user can use
multiple Thin Clones of a single production dataset to achieve results more quickly.
An administrator can meet defined SLAs by using Thin Clones to maintain hot
backup copies of production systems. If there is corruption of the production
dataset, the user can immediately resume the read/write workload by using the
Thin Clones.
Thin Clones can also be used to build and deploy templates for identical or near-
identical environments.
Development and test environments: Work with real workloads and all data
services that are associated with production storage with no effect on production.

Any-Any Refresh: Any Thin Clone can be refreshed from any eligible snapshot in
the Base LUN family, not only from the base LUN.
The Create operation uses a Base LUN to build the set of derivative snapshots,
and Thin Clones.
Refreshing a Thin Clone updates the Thin Clone’s data with data from a different
source snapshot. The new source snapshot must be related to the base LUN for
the existing Thin Clone. In addition, the snapshot must be read-only, and it must
have its expiration policy and automatic deletion disabled.
This example shows that the user is refreshing Thin Clone3 with the contents of
source Snap1.
After the Thin Clone is refreshed, the existing data is removed and replaced with
the Snap1 data. There are no changes to the data services configured in the Thin
Clone, and if the Thin Clone has derivative snapshots they remain unchanged.
In this example, the source snapshot of the Thin Clone changes. So instead of
being Snap3, the source snapshot is now Snap1.
Observe that the original parent resource does not change when a Thin Clone is
refreshed to a different source snapshot. The new source snapshot comes from the
same base LUN.
Refreshing a Base LUN updates the LUNs data with data from any eligible
snapshot in the Base LUN family including a snapshot of a Thin Clone. The new
source snapshot must be related to the Base LUN family for the existing Thin
Clone. In addition, the snapshot must be read-only, and the retention policy must
be set to no automatic deletion.
This example shows the user refreshing LUN1 with the data from Snap3. When the
LUN is refreshed, the existing data is removed from LUN1 and replaced with the
data from Snap3.
There are no changes to the data services configured on the Thin Clone. If the Thin
Clone has derivative snapshots, the snapshots remain unchanged.
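The refresh rules from the preceding pages can be summarized as an eligibility check. A conceptual Python sketch; the class and field names are invented for illustration:

from dataclasses import dataclass

@dataclass
class Snapshot:
    base_lun: str        # Base LUN family the snapshot belongs to
    read_only: bool
    auto_delete: bool
    has_expiration: bool

def can_refresh_from(snap, target_base_lun):
    # A refresh source snapshot must belong to the same Base LUN family,
    # be read-only, and have auto-deletion and expiration disabled.
    return (snap.base_lun == target_base_lun
            and snap.read_only
            and not snap.auto_delete
            and not snap.has_expiration)

snap1 = Snapshot("LUN1", read_only=True, auto_delete=False, has_expiration=False)
print(can_refresh_from(snap1, "LUN1"))  # True
print(can_refresh_from(snap1, "LUN7"))  # False: different Base LUN family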
This page shows the Unisphere LUNs page with the example of a Base LUN and
its respective Thin Clones.
In the example, Base_LUN1 has two Thin Clones that were taken at different times:
• Base_LUN1 has an allocated percentage of 63.1.
• TC1OriginalData is a clone of the original Base_LUN1 and has an allocation of
2.1 percent.
• TC2AddedFiles was taken after adding files to Base_LUN1.
For the bottom window, the Base_LUN1 has been selected and the Refresh option
is used to populate the base LUN.
In the top window, the SnapOriginalData resource has been selected. Note that
the Attached and Auto-Delete options must display a status of No.
The bottom window shows that the results after the Base_LUN1 has been updated
with the SnapOriginalData snapshot. The properties of Base_LUN1 show that the
Allocated space is only 2.1% after the refresh.
Thin Clones
Dell Unity Snapshots and Thin Clones are fully supported with data reduction and
Advanced Deduplication. Snapshots and Thin Clones also benefit from the space
savings that are achieved on the source storage resource.
When writing to a Snapshot or Thin Clone, the I/O is subject to the same data
efficiency mechanism as the storage resource. Which efficiency algorithms are
applied depends on the Data Reduction and Advanced Deduplication settings of
the parent resource.
File System Quotas

File system quotas enable storage administrators to track and limit usage of a file
system. Quota policies use:
− Hard and soft limits set on the amount of disk space allowed for
consumption.
Dell EMC recommends that quotas are configured before the storage system
becomes active in a production environment. Quotas can be configured after a file
system is created.
Default quota settings can be configured for an environment where the same set of
limits are applied to many users.
These parameters can be configured from the Manage Quota Settings window:
• Quota policy: File size [default] or Blocks
• Soft limit
• Hard limit
• Grace period
The soft limit is a capacity threshold. When file usage exceeds the threshold, a
countdown timer begins. The timer, or grace period, continues to count down as
long as the soft limit is exceeded. However, data can still be written to the file
system. If the soft limit remains exceeded and the grace period expires, no new
data may be added to the particular directory. Users associated with the quota are
also prohibited from writing new data. When the capacity is reduced beneath the
soft limit before the grace period expires, access to the file system is allowed.
The grace period can be limited (in days, hours, and minutes) or unlimited. When
the grace period is unlimited, data can be written to the file system until the quota
hard limit is reached.
A hard limit is also set for each quota configured. When the hard limit is reached,
no new data can be added to the file system or directory. The quota must be
increased, or data must be removed from the file system before more data can be
added.
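The soft-limit, grace-period, and hard-limit behavior forms a small state machine. A minimal Python sketch with a one-day grace period, matching the example that follows; the function and its parameters are illustrative, not a system API:

from datetime import datetime, timedelta

GRACE = timedelta(days=1)

def quota_state(usage_gb, soft_gb, hard_gb, soft_crossed_at, now):
    # Writes are allowed under the soft limit, allowed during the grace
    # period, and blocked at the hard limit or once the grace period
    # expires while usage is still over the soft limit.
    if usage_gb >= hard_gb:
        return "blocked: hard limit reached"
    if usage_gb > soft_gb:
        if soft_crossed_at is not None and now - soft_crossed_at > GRACE:
            return "blocked: grace period expired"
        return "allowed: within grace period"
    return "allowed"

t0 = datetime(2024, 1, 1)
print(quota_state(21, 20, 25, t0, t0 + timedelta(hours=6)))  # within grace period
print(quota_state(21, 20, 25, t0, t0 + timedelta(days=2)))   # grace period expired
print(quota_state(25, 20, 25, t0, t0 + timedelta(hours=1)))  # hard limit reached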
Quota Usage
File system quotas can track and report usage of a file system.
[Diagram: file system quota usage shown against the quota soft limit, hard limit,
and a one-day grace period.]
In this example, the user crosses the 20 GB soft limit. The storage administrator
receives an alert in Unisphere stating that the soft quota for this user has been
crossed.
The Grace Period of one day begins to count down. Users are still able to add data
to the file system. Before the expiration of the Grace Period, file system usage
must be less than the soft limit.
[Diagram: quota usage still exceeding the soft limit when the one-day grace
period expires.]
When the Grace Period is reached and the usage is still over the soft limit, the
system issues a warning. The storage administrator receives a notification of the
event.
The transfer of more data to the file system is interrupted until file system usage is
less than the allowed soft limit.
[Diagram: quota usage exceeding the 20 GB soft limit and reaching the 25 GB
hard limit.]
If the Grace Period has not expired and data continues to be written to the file
system, eventually the Hard Limit is reached.
When the hard limit is reached, users can no longer add data to the file system and
the storage administrator receives a notification.
1. Data Reduction
a. Dell Unity XT Data Reduction provides space savings by using data
deduplication and compression.
b. Data reduction is supported on All Flash pools created on Dell Unity XT
Hybrid Flash systems or Dell Unity XT All Flash systems.
c. Data reduction is supported on thin storage resources: LUNs, LUNs within a
Consistency Group, file systems, and VMware VMFS and NFS datastores.
2. FAST VP
a. Dell Unity Fully Automated Storage Tiering for Virtual Pools (FAST VP)
monitors the data access patterns within heterogeneous pools on the
system.
b. In storage pools with Flash, SAS and NL-SAS, FAST VP uses the Flash
drives for active data, and low cost spinning drives for less active data.
c. There are four tiering policies: Start High then Auto-tier (default), Highest
Available tier, Auto Tier, Lowest Available Tier.
d. RAID levels for each tier can be selected when creating a pool. The
supported RAID levels are RAID 1/0, RAID 5 and RAID 6.
e. Data relocation can be scheduled at the system level, or manually started at
the storage pool level.
3. Thin Clones
a. A Thin Clone is a read/write copy of a thin block storage resource that
shares blocks with the parent resource.
– The resource is built from a snapshot of thin block storage resources:
LUNs, LUNs that are members of a Consistency Group, or VMFS datastores.
– Thin Clones can be created from Attached read-only or Unattached
Snapshots with no auto-deletion policy and no expiration policy.
– Thin Clones are supported on all Dell EMC Unity XT models including
Dell EMC UnityVSA.
4. File System Quotas
a. Dell Unity XT systems support file system quotas which enable storage
administrators to track and limit usage of a file system.
b. Quota limits can be designated for users, a directory tree, or users within a
quota tree.
c. Quota policy can be configured to determine usage per File Size (the
default), or Blocks.
d. The policies use hard and soft limits set on the amount of disk space
allowed for consumption.
For more information, see Dell EMC Unity: Data Reduction; Dell EMC Unity:
FAST Technology Overview; Dell EMC Unity: Snapshots and Thin Clones: A
Detailed Review; and Dell EMC Unity: NAS Capabilities on the Dell Technologies
Info Hub.
OE Version   Feature          Pool Type         Supported Models
4.3 / 4.4    Data Reduction   All Flash pool¹   300 | 400 | 500 | 600
                                                300F | 400F | 500F | 600F
                                                350F | 450F | 550F | 650F
5.0 / 5.1    Data Reduction   All Flash pool¹   300 | 400 | 500 | 600
                                                300F | 400F | 500F | 600F
                                                350F | 450F | 550F | 650F
                                                380 | 480 | 680 | 880
                                                380F | 480F | 680F | 880F
¹ The pool must contain a flash tier, and the total usable capacity of the flash tier
must meet or exceed 10% of the total pool capacity.
In the example, the Flash Percent (%) of Pool 1 does not comply with the data
reduction requirements.
Data Reduction is disabled and unavailable for any storage resource that is created
from the pool. The feature is grayed out, and a message explains the situation.
The example shows that data reduction is not available when creating a LUN on
Pool 1.
Create LUN wizard with the Data Reduction option unavailable for configuration.