Provisioning with EMC VIAS
ABSTRACT
This paper provides guidance on the planning, implementation, and best
practices for using VPLEX Integrated Array Services (VIAS) and SMI-S Provider
functionality in your VPLEX environment.
TABLE OF CONTENTS
EXECUTIVE SUMMARY
Business Case
Solution Overview
Audience
VIAS REQUIREMENTS
VIAS RECOMMENDATIONS
VIAS PROVISIONING
General Provisioning Limits
Distributed Storage Views
Creating Virtual Volumes
Provisioning from Pools
Provisioning from Storage Volumes
Provisioning with the VPLEX CLI
Provisioning Request Jobs
Monitoring Provisioning Jobs
Using Hierarchy Maps
Unclaiming Storage Volumes Created with VIAS
APPENDIX
Business Communication Layer Service (BOLE)
Log Files for VIAS
VPLEX 5.3.0.00.00.10 Limitations
EXECUTIVE SUMMARY
Business Case
The EMC® VPLEX™ family removes physical barriers within, across, and between data centers. VPLEX Local provides simplified
management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides data access and mobility
between two VPLEX clusters within synchronous distances. With a unique scale-up architecture, VPLEX’s advanced data caching
and distributed cache coherency provide workload resiliency and balancing, and enable both local and remote data access with
predictable service levels.
Solution Overview
This white paper is designed to give technology decision-makers a deeper understanding of the VPLEX Integrated Array Services
in terms of the overall solution, planning, design, and features and functionality. It covers the key technical considerations for
implementing VPLEX Integrated Array Services for provisioning in VPLEX, and explains how to provision storage from storage pools
provided by VMAX, VNX, Unity, and XtremIO arrays. VIAS provisioning creates new virtual volumes on demand, with the required
service levels, across multiple arrays in five easy steps.
Audience
This white paper is intended for technology architects, storage administrators, and system administrators who are responsible
for architecting, creating, and managing IT environments that utilize EMC VPLEX technologies. The white paper assumes the reader is
familiar with VPLEX, VMAX, VNX, Unity and XtremIO technologies.
VPLEX INTEGRATED STORAGE PROVISIONING
The VPLEX Integrated Array Service (VIAS) feature enables VPLEX to provision storage for EMC VMAX, VNX, Unity and XtremIO
storage arrays directly from the VPLEX CLI, UI, and REST API. VPLEX uses Array Management Providers (AMPs) to streamline
provisioning, and allows you to provision a VPLEX virtual volume from a pool on the array.
The VIAS feature uses the Storage Management Initiative Specification (SMI-S) provider to communicate with VMAX and VNX
arrays while using the REST provider to communicate with XtremIO and Unity arrays to enable storage provisioning. An AMP
must be installed and configured on the same storage area network (SAN) as the VPLEX, and both the VPLEX and the provider
must be configured for the same arrays. After the provider has been configured, it must be registered as an AMP with VPLEX.
This is accomplished using either the VPLEX CLI or Unisphere for VPLEX GUI. After the registration has completed, the managed
arrays are automatically detected and visible to VPLEX and provisioning of virtual volumes can be accomplished using disk
groups or storage pools visible from each array.
Provisioning in VPLEX has been streamlined and enhanced to provide a more efficient way to provision storage for EMC arrays.
Simplified provisioning uses a wizard to create new virtual volumes on demand, with the service levels needed, in five steps:
1. Select a consistency group and add storage
2. Select VPLEX protection services to apply to the production volume (none, local RAID-1, and distributed mirroring), the
number of volumes, and the capacity per volume
3. Select the underlying storage array pools from which to draw storage
4. Select a storage view to expose the new virtual volumes to hosts
5. Review the selections and submit the provisioning request
Integrated storage refers to storage created through the VPLEX Integrated Services feature. This feature requires the use of
AMPs to leverage storage pools on the array. If your EMC array configuration includes storage pools and the array is supported
for use with VPLEX, you can integrate the array with VPLEX and provision storage from pools on the array through VPLEX.
In Figure 2, the SMI-S Providers are configured to see the same arrays to which VPLEX is already connected. Once this
configuration is done, a user can register an SMI-S provider as an AMP on the VPLEX. This is done by running the CLI or GUI
wizard. You will provide the SMI-S Provider’s IP address and credentials in order to complete registration. At this point the
managed arrays, with their disk groups and pools for provisioning, will be visible in the VPLEX UI.
VIAS REQUIREMENTS
Support for VMAX and VNX is available starting with GeoSynchrony 5.3.
Support for XtremIO 4.0 is available starting with GeoSynchrony 6.0.1.
Support for Unity is available starting with GeoSynchrony 6.0.1.
For the 5.5 and 6.0 GeoSynchrony releases, the max number of provisioning jobs that may be queued is 100. This does not
include cancelled or completed jobs. The maximum number of virtual volumes per request is 10. After one request is complete,
the next request can be initiated. During a provisioning request, it might take several minutes for the job ID to be returned. In
the GUI, the provisioning wizard will display an in-progress icon until the job ID is ready. The same applies to the CLI; it
might take several minutes for the job ID to be displayed. This behavior has been greatly enhanced since the 5.3 and 5.4
versions of GeoSynchrony.
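The queue and per-request limits above can be checked on the client side before a request is submitted. The following Python sketch uses the limit values stated in this document; the function and its interface are hypothetical and are not part of any VPLEX API:

```python
# Limits documented for the GeoSynchrony 5.5 and 6.0 releases.
MAX_QUEUED_JOBS = 100        # cancelled and completed jobs do not count
MAX_VOLUMES_PER_REQUEST = 10

def can_submit(queued_jobs, volumes_requested):
    """Return (ok, reason) for a prospective provisioning request.

    queued_jobs: jobs currently queued (excluding cancelled/completed).
    volumes_requested: virtual volumes this request would create.
    """
    if volumes_requested > MAX_VOLUMES_PER_REQUEST:
        return False, "a request may create at most 10 virtual volumes"
    if queued_jobs >= MAX_QUEUED_JOBS:
        return False, "provisioning job queue is full (100 jobs)"
    return True, "ok"
```

A caller would wait for the previous request to complete before initiating the next one, as described above.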
VIAS RECOMMENDATIONS
As a best practice, do not use the RESTful interface to poll the pools on a managed array. This can be a very expensive,
resource-consuming operation. EMC recommends that you limit the number of times you query pools from any of the
interfaces (CLI, GUI, and RESTful API).
Avoid having redundant array providers managing the same arrays. It is possible to add many array providers.
Additional providers that are not used to actively manage an array should be removed.
Listing pools is expensive on the SMI-S provider. While a provisioning request is in progress, try to limit the number of
times you list pools. This request can be synchronous and can cause delays in the response time for listing pools.
Know the SMI-S provider health. Before executing a provisioning request, you might want to check the health of the
provider. This is best done by looking at the ECOM thread or service. There have been cases where the SMI-S
provider takes up a significant percentage of the CPU (25%) when there is no interaction. In such cases, the provider is having an
issue.
If there are multiple masking views, VPLEX will place the provisioned LUNs in the masking view with the least number of
devices. In other words, provisioning is deterministic in terms of the masking view with the least number of devices.
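The placement rule above can be illustrated with a short Python sketch (the data model here is hypothetical; VPLEX performs this selection internally):

```python
def pick_masking_view(masking_views):
    """Given a mapping of masking-view name -> list of device IDs,
    return the view VPLEX would target: the one with the fewest devices.
    Ties are broken by name here purely to keep the sketch deterministic."""
    return min(masking_views, key=lambda name: (len(masking_views[name]), name))
```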
The minimum capacity that may be provisioned is 150MB.
The automatic provisioning job deletion only deletes completed and failed jobs. The jobs are automatically deleted after
seven days. In-progress and canceled jobs must still be deleted manually.
Local jobs will be executed even if the remote jobs fail on the remote cluster.
In a scenario where one cluster fails, the distributed RAID1 provisioning requests will remain queued until the other
cluster is reachable.
There are different timers that are executed at each level of a provisioning request. When a cluster fails or an AMP
becomes unreachable, a timeout occurs. In the event of a timeout, provisioning requests will fail with the message:
“futures timeout”
The provisioning job status will persist through any SMS failures or reboots.
If a provisioning job is in progress, and there is a VPN or SMS failure, the job listing on the remote cluster will not be
visible. If the jobs are executed on the remote cluster, then the provisioning job listings will remain empty until the VPN
is re-established or the remote cluster is again reachable.
For VMAX, if a LUN is provisioned from a pool that has a specific policy, then the storage group in the masking view
must also have the same policy. During provisioning, if a LUN is created from a pool with a specific policy that the
masking view configured to VPLEX does not have, the LUN is still created. However, the LUN cannot be added to the
masking view and is not exposed to VPLEX.
Before provisioning from storage pools, you must first register the AMP that manages the array. Your VPLEX system can
include AMPs that manage some or all of the arrays in your environment. An array must be integrated with VPLEX in
order to provision storage from pools on the array. Note that you can also provision from storage volumes, on both
integrated and non-integrated arrays.
Note: VIAS uses the Storage Management Initiative Specification (SMI-S) provider to communicate with the VMAX and VNX
arrays. Please validate your versioning requirements with the EMC Simple Support Matrix for VPLEX.
The factors determining these memory allocation errors are governed by the amount of physical memory on the host and by
the number and size of the array configurations. Because it is difficult to predict how much memory is required for this
type of installation scenario, perform the following steps to prevent these errors from occurring:
Instruct SMI-S Provider to use its own symapi database by editing the following file:
C:\Program Files\EMC\ECIM\ECOM\providers\oslsprovider.conf
Change the following line in oslsprovider.conf:
#OSLSProvider/com.emc.cmp.osls.se.array.StorApi.database.filename =
to:
OSLSProvider/com.emc.cmp.osls.se.array.StorApi.database.filename = c:/program
files/emc/symapi/db/symapi_smi_db.bin
Stop ECOM, the ECC Symmetrix agents, Unisphere for VMAX 1.0, and the Solutions Enabler daemons.
Remove the existing symapi_db.bin file, then restart ECOM, the ECC Symmetrix agents, Unisphere for VMAX 1.0, and
the Solutions Enabler daemons.
- VNX Arrays: SMI-S Provider discovers CLARiiON and VNX storage arrays using one of the following discovery
methods:
In-band discovery: This is conceptually the same as the Symmetrix SMI-S Provider-based discovery. If the correct
information is in place, SMI-S Provider can automatically discover a CLARiiON array or VNX storage system during its
startup. For SMI-S Provider to automatically discover locally-attached CLARiiON or VNX arrays, you must add
authorization information for its storage processors. A locally-attached storage array is one in which at least one
CLARiiON or VNX LUN is visible to the host on which the SMI-S Provider is running. Use the SYMCLI symcfg command,
as shown in the following syntax example, to add the username and password for each storage processor:
symcfg authorization add -host <HostName> -username <UserName> -password <PassWord>
Out-of-band discovery: You can add a VNX array to the SMI-S Provider without a VNX LUN being present on the host
on which SMI-S Provider is installed. Typically, this method is preferred over in-band discovery. SMI-S Provider has a
programmatic interface that provides management applications integrated with the provider the ability to discover VNX
storage arrays out of band. This discovery method only requires an IP connection to the storage array.
- Modify PATH Environment Variable:
set PATH=%PATH%;"C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"
After connecting to the SMI-S Provider, you may issue the “dv” (display view) command to list all of the basic provider
information and each storage array that has been attached or remotely registered.
SMI-S FIREWALL CONSIDERATIONS FOR VIAS
This section explains the required firewall ports that must be opened for SMI-S to communicate with VPLEX VIAS. If a firewall
exists between the SMI-S Provider installation and a VNX array, port 443 must be opened in both directions in the firewall for
management communications to occur with the array. By default, the ECOM server listens on ports 5988 (for HTTP) and 5989
(for HTTPS) and must be able to communicate in both directions, as well as port 443. This may be done by opening Server
Manager in Windows and selecting “Advanced Firewall Settings.” From there, create both inbound and outbound TCP/UDP
rules for ports 5988 and 5989, as seen in Figure 8.
Figure 8 - Creating Inbound TCP Firewall Rules
There is one more change that we will need to make. Click on the Dynamic Settings link from the ECOM Administration Page and
locate the setting for SSLClientAuthentication. Select “None” and check the “Persist” box, then “Apply” these settings. This
avoids a potential problem with SSL negotiations without lowering the security level. You will need to restart the ECOM service if
you modify parameters on this page (“net stop ecom” and “net start ecom”).
ADDITIONAL RESOURCES
For a list of arrays supported for use with VPLEX's Integrated Array Services, refer to the EMC VPLEX and
GeoSynchrony Simple Support Matrix document, available on EMC Online Support.
To correctly install and configure a SMI-S Provider, refer to the EMC SMI-S Provider Release Notes, available on
EMC Online Support.
To configure and use the REST provider, refer to the VPLEX Element Manager API Guide.
To use VIAS in VPLEX, you must register an array management provider, and then use the GUI provisioning wizard or the CLI to
provision virtual volumes. In our example, we will use the VPLEX GUI to show the following tasks needed to provision storage:
Register the array management provider.
Register the initiators that access VPLEX storage.
Create a storage view that includes virtual volumes, initiators, and VPLEX ports.
Select the storage array and storage volumes to create virtual volumes.
Figure 12 - Registering the AMP
After the array management provider is registered, VPLEX correlates the arrays to identify which arrays it is connected to and
which arrays VPLEX can also manage. There is no restriction on the number of SMI-S providers that can be registered. If the
credentials passed during registration are valid and the provider is IP reachable, the AMP can be successfully registered. The
username, password, IP, and port must all be valid in order to register the AMP. It is best to validate that the credentials are
correct using the ECOM Explorer before registering a provider. This way you can ensure the SMI-S configuration is correct. If the
registration fails, the AMP is not persisted and the password is not saved.
Passwords are stored in the lockbox on each management server.
Passwords are not stored in any log file or in capture logs.
Figure 13 - Registering the AMP (SMI-S or REST) with the Wizard
If multiple registered AMPs manage the same storage array, VPLEX will display the managed array only once under the
AMP chosen by VPLEX to do provisioning operations. It is recommended (but not required) that an array be managed by
only one registered AMP.
A managed array is an array that is both visible by SMI-S and registered with VPLEX. If an array is visible by the SMI-S provider
and not registered with VPLEX, it will not be considered managed. You do not need to un-register and re-register your SMI-S
provider to force updates when adding new arrays. If the configuration is set up correctly, the GUI will poll every 3 minutes and
automatically discover when new arrays are added.
Each managed array has a list of pools associated with it. In order to provision, you must know the pool you want to provision
from. Once the AMP has been registered, the SMI-S provider will list all disk groups and pools from your VMAX and VNX arrays,
so it’s a good idea to create a naming convention that makes it easy to identify the tiered storage pools intended for
VPLEX provisioning.
VIAS PROVISIONING
Provisioning is available through both the CLI and the GUI. Please see https://support.emc.com for the most current VPLEX
Administration Guide and VPLEX CLI Guide.
DISTRIBUTED STORAGE VIEWS
Distributed storage refers to storage objects (consistency groups, virtual volumes, devices) that are created by using storage
from both clusters. All distributed storage objects can be conveniently accessed from the Distributed menu option in the
Provision Storage hover menu. This menu option opens the Distributed storage section in the navigation pane on the left. From
this section you can view a grouping of all distributed consistency groups, distributed virtual volumes and distributed devices.
Step One: For the volume to be created, select an existing consistency group or create a new one. To add the virtual volume to
an existing consistency group, select the Add storage to existing consistency group radio button and then select the consistency
group in the list. To filter the list of consistency groups by name, type the full or partial name of the consistency group in the
Filter text box. To filter by storage location, click the drop down box to the right of the Filter text box, and then select the
storage location. To create a new consistency group for the virtual volume, select the Create new consistency group radio button.
Step Two: Select mirroring on the cluster where the virtual volumes will be created (optional), and then enter the quantity, size
and base name for the virtual volumes.
High Availability Options: Select the check box to enable mirroring at the cluster indicated. The option to mirror at a
cluster is disabled if you have fewer than two arrays in the cluster.
o For a local consistency group, select the check box to enable mirroring at the local cluster.
o For a distributed consistency group, select the appropriate check box to enable mirroring at one cluster or both
clusters, or leave the check boxes unselected to disable mirroring.
Number of Volumes: The number of volumes to create. You can create a maximum of 10 volumes at a time. Each
volume will have the same RAID geometry.
Capacity per Volume: The size of each volume. Select the size in MB, GB or TB. The minimum size is 150MB. Each
volume will have the same capacity.
Total Capacity: The total capacity of all the virtual volumes to be created.
Volume Base Name: Type a base name for the virtual volumes. A number will be added to the base name for each
volume created. For example, using the base name Test, volumes will be named Test_1, Test_2, Test_3, and so forth.
Note: If a base name has been previously used with a VIAS provisioning job, it cannot be re-used when provisioning
additional LUNs to a storage view or set of hosts.
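The naming and sizing rules in Step Two can be sketched in Python as follows. The 150 MB minimum and 10-volume maximum come from this document; the helper itself is hypothetical and only illustrates the wizard's behavior:

```python
MIN_CAPACITY_MB = 150   # documented minimum capacity per volume
MAX_VOLUMES = 10        # documented maximum volumes per request

def plan_volumes(base_name, count, capacity_mb):
    """Return the list of volume names and the total capacity in MB,
    applying the wizard's documented constraints."""
    if not (1 <= count <= MAX_VOLUMES):
        raise ValueError("you can create between 1 and 10 volumes at a time")
    if capacity_mb < MIN_CAPACITY_MB:
        raise ValueError("minimum capacity per volume is 150 MB")
    # A number is appended to the base name for each volume created,
    # e.g. base name "Test" yields Test_1, Test_2, Test_3, ...
    names = [f"{base_name}_{i}" for i in range(1, count + 1)]
    return names, count * capacity_mb
```

For example, plan_volumes("Test", 3, 500) yields the names Test_1, Test_2, Test_3 and a total capacity of 1500 MB, matching the Total Capacity field the wizard displays.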
Figure 18 - Step Three, Selecting Storage
Step Three: Select a storage pool to provide storage for the virtual volumes to be created. Click the drop down box and select
an array, and then select a pool from the list of available pools on the array. The following information is shown for each
available pool:
Pools: The name of the pool on the array.
Type: Indicates if the pool is thickly- or thinly-provisioned on the array. If you select a thin pool, a thin rebuild will be
automatically set for the storage volume that is created during the provisioning process.
Free: The amount of storage available for provisioning from the pool.
Total: The total amount of storage available from the pool.
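Choosing a pool from the columns shown in Step Three can be sketched as below. The data model is hypothetical (the wizard presents this choice interactively); the sketch simply filters pools with enough free capacity, optionally favoring thin pools since a thin pool triggers an automatic thin rebuild as noted above:

```python
def eligible_pools(pools, needed_mb, prefer_thin=False):
    """Filter pools that have enough free capacity for the request.

    pools: list of dicts with 'name', 'type' ('thin'/'thick'),
           'free_mb', and 'total_mb', mirroring the wizard's columns.
    Returns pool names sorted by free capacity, largest first; thin
    pools are listed first when prefer_thin is set.
    """
    ok = [p for p in pools if p["free_mb"] >= needed_mb]
    ok.sort(key=lambda p: (prefer_thin and p["type"] != "thin", -p["free_mb"]))
    return [p["name"] for p in ok]
```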
Step Four: In Figure 19 we select a storage view for the virtual volume, or select “None” to continue and select a storage view
later. If you select an active storage view, the virtual volume is added to that storage view and automatically becomes visible to
hosts with access to the storage view. In some cases a rescan of the host may be required (depending on HBA and multipathing
versions). If the storage view is inactive, go to the Storage Views screen and ensure that the components necessary to activate
the storage view (host initiators, VPLEX FE ports, storage volumes) are present. Should you select “None”, you must manually
add the virtual volume to a storage view when ready.
Step Five: In Figure 20 we review and finish our selections. This page shows a summary of your selections in the wizard
and a topology map of the volume to be created. Review your selections, and if satisfied, click Finish to submit the provisioning
request. To make changes, click Back to go back to a previous step and make the necessary changes.
Step Six: Reviewing the results. This page shows the results of the provisioning request. Each provisioning request creates a
provisioning job that runs in the background. To monitor the job, click View Job Status in the lower-right corner of the screen to
go to the Provisioning Jobs page. Otherwise, click Close to close the wizard.
PROVISIONING FROM STORAGE VOLUMES
The Provision from Storage Volumes wizard allows you to provision a virtual volume directly from a storage volume or preserve
data on an existing storage volume that you want to expose to hosts. The wizard simplifies the provisioning process by
automatically claiming storage (if not already claimed) and creating all of the underlying storage objects (extents and devices),
and then creating a local or distributed virtual volume that is the total capacity of the selected storage volume. When
provisioning from storage volumes, you can create only one virtual volume at a time. Each virtual volume created maps to a
storage volume on the array. You can provision from storage volumes using integrated or non-integrated storage arrays. Refer
to Provisioning Overview for more information on integrated and non-integrated storage.
Creating a new virtual volume is as simple as selecting or creating a consistency group for the volume, selecting mirroring
options (optional), and then selecting an available storage volume from an array on the cluster on which you want to create the
virtual volume. You do not need to claim the storage volume before you begin to provision, and you can select claimed or
unclaimed storage volumes from the selected array. Unclaimed storage volumes will be automatically claimed.
You may launch the Provision from Storage Volumes wizard from any of the locations listed below. Depending on where you
launch the wizard, you are presented with the option to create specific types of consistency groups.
Provisioning storage hover menu (Local or distributed)
Distributed Virtual Volumes screen (Distributed only)
Virtual Volumes screen on a specific cluster (Local only)
Storage Volumes screen on a specific cluster (Local only)
The Provision from Storage Volumes wizard consists of the following steps:
1. Select an existing consistency group for the volume or create a new consistency group
2. Select mirroring options, synchronization options, and provide a name for the virtual volume
3. Select a storage volume to use to create the virtual volume
4. Expose the virtual volume to hosts by selecting a storage view
5. Review your selections and submit the provisioning request
6. View the results of the operation
Step One: Select a consistency group for the virtual volume. For the volume to be created, select an existing consistency group
or create a new one. To view the topology of a consistency group, select the group in the list. In the map that displays to the
right, the storage icon and blue shading around a cluster indicate that the selected consistency group uses storage from that
cluster.
To add the virtual volume to an existing consistency group, select the Add storage to existing consistency group radio
button and then select the consistency group in the list.
To filter the list of consistency groups by name, type the full or partial name of the consistency group in the Filter text
box. To filter by storage location, click the drop down box to the right of the Filter text box, and then select the storage
location.
To create a new consistency group for the virtual volume, select the Create new consistency group radio button.
Figure 22 - Step One, Select a Consistency Group
Step Two: Select protection options. Select high availability options, synchronization options, and enter a name for the volume.
If a cluster has only one array, the ability to mirror is disabled for that cluster.
Source Cluster: Select the cluster where you want to create the virtual volume.
High Availability Options: Select a check box to enable mirroring at the cluster indicated. You can select a single
cluster, both clusters, or leave both check boxes unselected to disable mirroring at both clusters. When you select the
check box to enable mirroring at a cluster, Local Mirroring appears above the cluster in the map.
Synchronization Options: Select the appropriate option to indicate whether or not you want to synchronize data from
the source storage volume to the target storage volume.
Volume Name: Type a name for the virtual volume.
Step Three: Select source storage from cluster-1 and cluster-2 volumes to create the virtual volume. Select a single storage
volume to create the virtual volume. The virtual volume will use the entire capacity of the selected storage volume. You can
select claimed or unclaimed storage volumes. To select the source storage on cluster-1, select an array in the drop down box,
and then select a storage volume from the list of available storage volumes on the array. Repeat the process for cluster-2.
Step Four: Expose the virtual volume to hosts. Select a storage view for the virtual volume, or select “None” to continue and
select a storage view later. If you select an active storage view, the virtual volume is added to that storage view and
automatically becomes visible to hosts with access to the storage view. In some cases a rescan of the host may be required. If
the storage view is inactive, go to the Storage Views screen and add the components necessary to activate the storage view. For
more information, refer to Creating a storage view. If you select “None”, when you are ready to expose the virtual volume to
hosts, you must manually add the virtual volume to a storage view.
Step Five: Review your selections and finish. This page shows a summary of your selections in the wizard and a topology map
of the volume to be created. Review your selections, and if satisfied, click Finish to submit the provisioning request. To make
changes, click Back to go back to a previous step and make the necessary changes.
Step Six: View results. This page shows the results of the provisioning request. The top portion of the page shows the success
or failure of the operation and the details. Select the row in the top portion of the page to see the expanded details in the Details
view at the bottom of the page. You can scroll through the detailed information and cut and paste information if needed. Click
Close to close the wizard when done.
PROVISIONING WITH THE VPLEX CLI
The “storage-tool compose” command will create virtual-volumes on top of the specified storage-volumes, building all
intermediate extents, local, and distributed devices as necessary.
Syntax:
storage-tool compose
[-n|--name] name
[-g|--geometry] {raid-0|raid-1|raid-c}
[-d|--storage-volumes] storage-volume [, storage-volume...]
[-s|--source-storage-volume] storage-volume
[-h|--help]
[--verbose]
Arguments (* indicates a required argument):
[-n|--name] name - * Specifies the name for the new virtual volume. Must be unique across the system.
[-g|--geometry] {raid-0|raid-1|raid-c} - * Specifies the desired geometry to use for the local
devices at each cluster. Valid values are raid-0, raid-1, or raid-c.
[-d|--storage-volumes] storage-volume [, storage-volume...] - * Specifies a list of storage volumes
from which to build the virtual volume. These may be claimed, but must be unused.
[-s|--source-storage-volume] storage-volume - Specifies the storage volume to use as a source mirror
when creating local and distributed devices.
Note: If specified, --source-storage-volume will be used as a source-mirror when creating local and
distributed RAID 1 devices. This will trigger a rebuild from the source-mirror to all other mirrors
of the RAID 1 device (local and distributed). While the rebuild is in progress the new virtual volume
(and supporting local and/or distributed devices) will be in a degraded state, which is normal. This
option only applies to RAID 1 local or distributed devices. The --source-storage-volume may also
appear in --storage-volumes.
[-h|--help] - Displays command line help.
[--verbose] - Provides more help during command execution. This may not have any effect for some
commands.
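A command line for storage-tool compose can be assembled programmatically from the syntax above. The following Python sketch only builds the argument list (it does not execute anything against VPLEX, and the volume names in the usage example are made up):

```python
def compose_command(name, geometry, storage_volumes, source=None):
    """Build the storage-tool compose argument list from the documented syntax."""
    if geometry not in ("raid-0", "raid-1", "raid-c"):
        raise ValueError("geometry must be raid-0, raid-1, or raid-c")
    cmd = ["storage-tool", "compose",
           "--name", name,                 # must be unique across the system
           "--geometry", geometry,
           "--storage-volumes", ",".join(storage_volumes)]
    if source is not None:
        # Optional source mirror; triggers a rebuild to all other mirrors.
        cmd += ["--source-storage-volume", source]
    return cmd
```

For example, compose_command("vv_demo", "raid-1", ["sv_1", "sv_2"], source="sv_1") reflects the note above: the source storage volume may also appear in the --storage-volumes list.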
Figure 28 - Provisioning Single Legged Local Volumes
Figure 30 - Distributed RAID1
The provisioning job status will persist through any SMS failures or reboots.
Queued jobs may be cancelled, but an in-progress job cannot be.
Local jobs will be executed even if the remote jobs fail on the remote cluster.
In a scenario where one cluster fails, the distributed RAID1 provisioning requests will remain queued until the other
cluster is reachable.
Completed or failed jobs are automatically deleted after seven days. In-progress and canceled jobs must be deleted
manually.
Failed: The job has failed. Depending on the step at which the failure occurred, you may need to take action to
manually roll back any components that were created. Refer to “When a provisioning job fails” for more information. The job status
is automatically updated every 10 seconds. However, you can manually refresh the job status at any time by clicking the refresh
icon in the upper right corner of the screen. When the job completes, a record of the job stays on the Provisioning Jobs Status
page for 48 hours, after which time it is automatically deleted.
A single provisioning job is created for each provisioning request, regardless of the number of volumes being provisioned. When
a provisioning job starts, the job is given a name that includes information about the job, such as the job number and the date
and time the job started. The following is the format of a job name: “Provision_#_dd-mm-yy:time”
Breakdown of the name:
“Provision_” is a prefix added to all job names.
“#” is the job number. This number increments with each provisioning job.
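The job-name format above can be parsed with a short sketch (Python; it assumes the literal "Provision_#_dd-mm-yy:time" layout described here, and the sample name in the test is made up):

```python
import re

# Matches names built from the documented format "Provision_#_dd-mm-yy:time",
# e.g. "Provision_12_25-06-14:09:30".
JOB_NAME = re.compile(
    r"^Provision_(?P<num>\d+)_(?P<date>\d{2}-\d{2}-\d{2}):(?P<time>.+)$")

def parse_job_name(name):
    """Return (job_number, date, time) or None if the name does not match."""
    m = JOB_NAME.match(name)
    if not m:
        return None
    return int(m.group("num")), m.group("date"), m.group("time")
```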
UNCLAIMING STORAGE VOLUMES CREATED WITH VIAS
When unclaiming storage volumes created with VIAS, you have the additional option to delete the storage volume from the array
and return the allocated storage volume capacity to the storage pool on the array.
APPENDIX:
BUSINESS COMMUNICATION LAYER SERVICE (BOLE)
The new array-aware provisioning and registration commands use a new service that communicates across the clusters. This
service is started by default and is intended to be a zero-touch service. When the SMS starts up, the business communication
layer service (BOLE) is started automatically.
The BOLE Service commands:
The VIAS log will have specific information about the SMI-S provider and direct calls to it from the AMP.
Please keep in mind that some errors that happen during provisioning are due to array restrictions and configurations that
VIAS may not enforce. This means that the native log files on the array may need to be reviewed to identify issues.
VPLEX 5.3.0.00.00.10 LIMITATIONS
VMAX, VNX, and XtremIO support only
DMX not supported