VMware Cloud Director Object Storage Extension – Reference Design
Table of Contents
Introduction
Audience
Use Cases
Multisite Deployment Requirements for VMware Cloud Director Object Storage Extension
OSE Configurations
Public S3 Endpoint
Logging
Abbreviations
WHITE PAPER | 3
VMware Cloud Director Object Storage Extension – Reference Design
Introduction
This guide provides information on how to properly design and deploy VMware Cloud Director Object Storage Extension on top of a VMware Cloud
Director infrastructure. This document is specific to VMware Cloud Director Object Storage Extension 2.2 and its integration with Cloudian HyperStore,
Dell EMC ECS, and AWS S3.
Information about how Object Storage Extension can utilize other S3-compatible storage through the Object Storage Interoperability Service (OSIS) can
be found in the Object Storage Interoperability Service Development Guide.
Audience
This document is intended for cloud provider architects and technical leads responsible for planning and executing the deployment and upgrades of a
VMware-based cloud environment.
OSE runs externally to VMware Cloud Director and integrates through a UI plug-in, which shows either provider or tenant information, depending on
the type of logged-in user.
OSE has a 1:1 relationship with a VMware Cloud Director site, which means that only one instance of OSE can be integrated with a single Cloud Director
site. OSE 2.2.3 is compatible with VMware Cloud Director version 10.3 or later, and the Cloud Director Service.
An instance of VMware Cloud Director Object Storage Extension can work with a single site of VMware Cloud Director or a single VMware Cloud
Director server group.
Object Storage Extension can be connected to the following storage providers: Cloudian HyperStore, Dell EMC ECS, AWS S3, or another S3-compatible
storage platform.[1] The provider can selectively enable VMware Cloud Director organizations to consume the service. The unique counterparts for
organizations and users are created at the storage provider. The users authenticate to the service with VMware Cloud Director or S3 credentials and
access it only through the UI plug-in. The provider can directly access the underlying storage appliance to set quotas or collect usage information for
billing purposes.
Providers can switch between storage platforms with VMware Cloud Director Object Storage Extension but cannot use two different storage platforms
simultaneously.
In addition to the storage platform and VMware Cloud Director themselves, OSE requires two or more (for high availability and scalability)
RHEL/CentOS/Oracle Linux/Ubuntu/Debian/Photon VM nodes that run OSE, provided as an RPM or DEB package. The number of OSE VM nodes depends
on the S3 storage used and on the OSE use case (for reference, see Deployment Options). These VMs are essentially stateless and persist all their data in
a PostgreSQL database, versions 10.x to 14.x. This can be the VMware Cloud Director external PostgreSQL database (if available) or a dedicated database
for VMware Cloud Director Object Storage Extension, depending on the OSE use case.
VMware Cloud Director Object Storage Extension (OSE) enables Cloud Director tenant users to consume object storage through a native UI experience
and lets S3 client applications consume the object storage through S3 APIs.
To connect Cloud Director with the selected S3 object storage platform, OSE uses the following user mapping:
• VMware Cloud Director provider is mapped to an ECS/Cloudian admin user, or AWS management account.
• VMware Cloud Director tenant org is mapped to an ECS namespace, Cloudian group, or AWS org unit.
• VMware Cloud Director tenant user is mapped to an ECS/Cloudian user, or AWS IAM user.
[1] S3-compatible storage can be connected to Cloud Director through the Object Storage Interoperability Service (OSIS).
The latest OSE 2.2.3 release provides the following new features and enhancements:
• VMware Cloud Director Object Storage Extension installation optimization - The VIP server and the Kubernetes Backup and Restore deployer are no
longer part of the VMware Cloud Director Object Storage Extension installation process.
• API token authentication - Cloud providers can now use an API token, instead of system administrator credentials, to authenticate against the
VMware Cloud Director instance where the plug-in is installed.
• Custom storage user mapping - Existing users in a supported S3 platform can now be mapped to a tenant organization in VMware Cloud
Director Object Storage Extension. One tenant user can be mapped to multiple S3 storage users.
• OSIS adapter name deprecation - The OSIS adapter name returned from the API GET /api/info is no longer used and is mapped to the local
adapter name in VMware Cloud Director Object Storage Extension.
• Support for OpenSSL 3 - VMware Cloud Director Object Storage Extension 2.2.3 now supports OpenSSL 3 for importing or generating
certificates with the command-line script "ose cert".
• OS and S3 storage support - VMware Cloud Director Object Storage Extension 2.2.3 expands the operating system versions it supports and
integrates with Dell EMC ECS 3.8.
Use Cases
VMware Cloud Director natively provides Infrastructure as a Service (IaaS) by integrating with the underlying VMware vSphere platform. All native
storage services, such as storage for virtual machines, named (independent) disks, and catalog storage for virtual machine templates and media, use
storage attached to vSphere ESXi hosts, such as block storage, NFS, or VMware vSAN.
There is, however, the need for highly scalable, durable, and network-accessible storage that could be utilized by tenants or their workloads without
the dependency on the vSphere layer. The VMware Cloud Director Object Storage Extension (OSE) provides access to the object storage either through
VMware Cloud Director UI extension or via standardized S3 APIs. This allows existing applications to easily access this new type of storage for various
use cases.
Thanks to OSE's full S3 API compatibility, it is also possible to use existing third-party applications to upload and manage the files in a bucket. In
Object Storage Extension, S3 buckets and their objects can also be accessed through a short S3 endpoint path.
Bucket permissions can be managed either through defining their Access Control Lists or by creating bucket policies. In OSE, bucket objects can be
synced on an org level with the connected S3 object storage.
Bucket objects can also be tagged, and their logs can be kept in another S3 bucket.
In addition, you can manage the lifecycle of bucket objects by setting the period for which the objects remain in the bucket before being
automatically deleted.
Server-side tenant-level encryption of bucket content is also possible with OSE. However, it is only applied to new objects.
An entire VMware Cloud Director catalog (consisting of vApp templates and media ISO images) can be captured from an existing VCD org catalog or
created from scratch by uploading individual ISO and OVA files to VMware Cloud Director Object Storage Extension. Then, the catalog can be
published, which allows any VMware Cloud Director organization (from any VMware Cloud Director instance) to subscribe to it. As a result,
this OSE functionality enables easy distribution of specific catalogs, publicly or geographically, across VMware Cloud Director instances.
The Kubernetes clusters that can be protected in OSE 2.2.3 include CSE native, TKG, and external clusters with the latest Kubernetes version.
OSE connects to Cloud Director and the object storage cluster from the backend. OSE makes REST API calls to Cloud Director for tenant and user
mapping for object storage. It also supports object storage-backed catalog contents and vApp backups. OSE connects to the object storage cluster for
tenancy management and data transfer. Depending on the type of the object storage cluster, there could be one port or multiple ports for the
communication between OSE and the object storage cluster.
OSE uses the S3 API to make queries to the underlying S3 storage, and it uses the vendor's identity and access management service to map Cloud
Director user types to those of the connected storage.
OSE uses a PostgreSQL database to store metadata. All management data, bucket metadata, and object metadata are stored in the database. If your
object storage solution is for internal use or a small business, you can consider re-using Cloud Director's PostgreSQL appliance. For a standard
deployment, you should consider deploying a standalone PostgreSQL server for OSE.
The bandwidth consumption between OSE and the object storage cluster is much higher than that of the communication between OSE and Cloud Director, so
you should consider deploying the OSE server nodes into a network with as little latency as possible to the storage cluster.
OSE also makes REST API calls to VMware Cloud Analytics to send product usage data. This part of the OSE architecture comes into play only if
tenants agree, in the Cloud Director UI, to participate in the VMware Customer Experience Improvement Program (CEIP) and allow VMware to collect
data for analysis.
In Object Storage Extension 2.2.3, Kubernetes backup and restore is no longer handled by a deployer that installed Velero in a Kubernetes
cluster. In the latest release, backup and restore operations are handled by a job that installs Velero in a selected Kubernetes cluster.
OSE catalogs use the vSphere catalog synchronization protocol to sync content with Cloud Director catalogs.
For vApps, OSE uses the REST API to export vApps from Cloud Director to the underlying S3 storage.
voss-keeper (system service) - As a system service, voss-keeper can be managed with the systemctl command-line utility. Stopping the voss-keeper
service also stops the OSE Java service on port 443.
OSE Java service (application service) - The public service of VMware Cloud Director Object Storage Extension that provides the APIs for the data
path and the control path on port 443.
Besides OSE-embedded components, the PostgreSQL database should be deployed to persist bucket/object metadata. The following is a high-level
diagram of the OSE components:
Object Storage Extension uses port 443 for communication with Cloud Director, S3 storage, and S3-compliant storage apps. A load balancer is used for
OSE nodes for production deployments to distribute the requests from Cloud Director to the OSE nodes. Through a URL redirect integrated with OSE,
Cloud Director providers can connect to the management console of the underlying S3 storage. Cloud Director cells can also use a load balancer to
distribute the OSE requests to Cloud Director. As part of the Cloud Director deployment, the Transfer Share provides temporary storage for uploads,
downloads, and catalog items that are published or subscribed externally.
OSE connects through port 5432 to the PostgreSQL database, which keeps the metadata of the stored objects.
• Deploy OSE to a local data center - This makes it easier to retain all management metadata in your local cloud. Also, AWS charges for storage and
for data transfer out of the AWS region. For more information, see AWS S3 Pricing.
• Deploy OSE to AWS - Deploying to AWS has the advantage of the lowest network latency for the data path. By setting up a Gateway VPC endpoint
between the OSE nodes and AWS S3, the cost of data transfer from OSE to S3 can be eliminated.
An OSIS adapter must be implemented for the administration work on the object storage cluster. The OSIS adapter can be deployed on a
standalone machine or on the local host of the OSE server node. Deploying the OSIS adapter on the OSE node eliminates the need for an
additional load balancer between OSE and the OSIS adapter.
Deployment Options
Based on the use case, user target group, and expected service parameters (SLA, scalability), the cloud provider can decide on the type of deployment.
Small Deployment
Usage: Niche use cases
• One or more RHEL/CentOS VMs for VMware Cloud Director. External PostgreSQL database (used for VMware Cloud Director and VMware
Cloud Director Object Storage Extension). NFS transfer share is needed when more than one VMware Cloud Director cell is used. Protected
with vSphere HA.
• One CentOS Linux 7 or 8/RedHat Enterprise Linux 7/Oracle Linux 7/Ubuntu 18+/Photon 3+/Debian 10+ VM: (4 vCPU, 8 GB RAM, 120 GB HDD)
running VMware Cloud Director Object Storage Extension. Protected with vSphere HA.
• Storage provider: Three CentOS virtual machines running Cloudian HyperStore, or Five CentOS virtual machines running Dell EMC ECS (4
vCPUs, 32 GB RAM, 32+100 GB HDD on shared storage) or AWS S3.
• Load balancing: Load balancing for the VMware Cloud Director cells and the Cloudian HyperStore or Dell EMC ECS nodes is provided by NSX.
Medium Deployment
Usage: typical use cases
• Multiple RHEL/CentOS or appliance VMs for VMware Cloud Director. NFS transfer share. For non-appliance form factor external PostgreSQL
database.
• One or more CentOS Linux 7 or 8/RedHat Enterprise Linux 7/Oracle Linux 7/Ubuntu 18+/Photon 3+/Debian 10+ VMs: (8 vCPU, 8 GB RAM,
120 GB HDD) running VMware Cloud Director Object Storage Extension. Protected with vSphere HA and optionally load balanced. If VMware
Cloud Director is deployed in appliance form factor, an external PostgreSQL database is needed.
• Storage provider: Three CentOS virtual machines running Cloudian HyperStore, Five CentOS virtual machines running Dell EMC ECS on
dedicated ESXi hosts with local disks (8 vCPUs, 64 GB RAM, 32 GB HDD + multiple large local disks) or AWS S3.
• Load balancing: Load balancing for the VMware Cloud Director cells and the Cloudian HyperStore or Dell EMC ECS nodes is provided by NSX or an
external hardware load balancer.
Large Deployment
Usage: large scale, low cost per GB use cases
• Multiple RHEL/CentOS or appliance VMs for VMware Cloud Director. NFS transfer share. For non-appliance form factor external PostgreSQL
database.
• Multiple CentOS Linux 7 or 8/RedHat Enterprise Linux 7/Oracle Linux 7/Ubuntu 18+/Photon 3+/Debian 10+ VMs (12 vCPU, 12 GB RAM, 120
GB HDD) running VMware Cloud Director Object Storage Extension. If VMware Cloud Director is deployed in an appliance form factor, an
external HA PostgreSQL database is needed.
• Storage provider: Three or more dedicated bare-metal Cloudian HyperStore nodes, five or more physical Dell EMC ECS nodes, or AWS S3.
The following figures display how to scale out and load balance Object Storage Extension with Cloudian HyperStore, Dell EMC ECS, and AWS S3.
Figure 19: Example of Scale Out of Object Storage Extension Deployment with Load Balancing
Multisite Deployment
Object Storage Extension supports VMware Cloud Director multisite deployments where different VMware Cloud Director instances are federated
(associated) with a trust relationship. As these instances can be deployed in different locations, the end-users can deploy their applications with a
higher level of resiliency and not be impacted by local datacenter outages.
Each VMware Cloud Director instance has its own VMware Cloud Director Object Storage Extension, which communicates with shared S3 object
storage deployed in a multi-datacenter configuration. Objects are automatically replicated across all data centers, and VMware Cloud Director users
can access them through either VMware Cloud Director or VMware Cloud Director Object Storage Extension endpoint.
Within a multisite architecture, you can configure VMware Cloud Director Object Storage Extension instances with a standalone virtual data center in
each site. The following diagram illustrates the architecture.
Figure 20: OSE Multisite Architecture: Single S3 Cluster for Multiple DCs
You can also configure VMware Cloud Director Object Storage Extension instances in different sites to use a single virtual data center. The following
diagram illustrates the architecture.
When you configure the multisite feature, you create a cluster of multiple VMware Cloud Director Object Storage Extension instances to create an
availability zone. You can group the VMware Cloud Director Object Storage Extension instances together only in a single region. A region is a collection
of the compute resources in a geographic area. Regions are isolated and independent of one another. VMware Cloud Director Object Storage Extension
does not support multi-region architectures.
You can share the same buckets and objects across tenant organizations within a multisite environment. To share buckets and objects across sites, map
all tenant organizations to the same storage group. See Edit Tenant Mapping Configuration.
Multisite Deployment Requirements for VMware Cloud Director Object Storage Extension
When you configure the multi-site single region feature with VMware Cloud Director Object Storage Extension, consider the following requirements.
• Associate the VMware Cloud Director sites that you want to use in the multisite environment. For more information, see the VMware Cloud Director
Cloud Provider Admin Portal Guide.
• Deploy and configure a VMware Cloud Director Object Storage Extension instance in each site.
• You can share your storage platform cluster across sites, or you can deploy and configure all required storage components in each site.
• For Cloudian HyperStore, set up a storage policy with a multi-DC data distribution group.
• For ECS, set up replication groups across the virtual data centers.
• For AWS S3, use the same AWS payer account to configure VMware Cloud Director Object Storage Extension. Also, make sure that all VMware Cloud
Director Object Storage Extension sites are configured with an AWS S3 endpoint in the same region.
OSE Scalability
OSE can be deployed as a cluster for high availability and distribution of hardware resources.
In the typical deployment topology, there are multiple OSE instances, multiple storage platform instances, and the database HA.
Procedure
1. Configure the OSE certificate, database, and Cloud Director UI plug-in.
2. Configure the connection to the Cloudian HyperStore Admin endpoint via the load balancer.
ose cloudian admin set --url hyperstore-lb-admin-url --user admin-user --secret 'password'
3. Configure the connection to the Cloudian HyperStore S3 endpoint via the load balancer.
4. Configure the connection to the Cloudian HyperStore IAM endpoint via the load balancer.
5. Configure the connection to the HyperStore Web Console via the load balancer.
ose cloudian console set --url hyperstore-lb-cmc-url --user admin-user --secret cmc-sso-shared-key
6. Start OSE.
7. Log in to Cloud Director and launch OSE to check whether it works normally.
To replicate the configuration to the other OSE instances:
1. SSH connect to the VM of the first OSE instance.
ssh user@host-ip
2. Export the OSE configuration to a file.
3. Copy the exported configuration file to the VMs of the other OSE instances.
4. SSH connect to the VMs of the other OSE instances and replicate the configuration by importing the configuration file.
Now the OSE cluster is created. In general, OSE instances are stateless, and all data is persisted in the shared database, so it is possible to add more
nodes on demand.
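Steps 3 and 4 of the endpoint configuration procedure above do not show their commands in this text. By analogy with the ose cloudian admin set and ose cloudian console set commands shown, a plausible sketch is the following; treat the s3/iam subcommand names and the URL placeholders as assumptions to verify against the OSE CLI reference:

```shell
# Assumed subcommands, following the pattern of "ose cloudian admin set";
# the URLs are placeholders for your HyperStore load balancer endpoints.
ose cloudian s3 set --url hyperstore-lb-s3-url
ose cloudian iam set --url hyperstore-lb-iam-url
```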
OSE Configurations
OSE Java Service
The OSE Java service is built with Spring Boot and offers both administrative and S3 APIs, for the OSE UI plug-in and for S3 API users.
The command ose service [start|stop] launches and shuts down the OSE Java service. The dedicated OSE CLI, e.g., ose cloudian
admin set, sets the basic configuration for the OSE service. The system administrator can also tune the OSE service with many other configurable
properties by using the CLI command ose args set. Here are two examples.
• To make OSE work in virtual-hosted style for S3 API, use the command:
ose args set -k s3.client.path.style.access -v false
• For a huge bucket (containing more than one hundred thousand objects), the object count for the bucket is estimated by default for performance
reasons. The estimation can be turned off with the command:
ose args set -k oss.object.count.estimate -v false
As a Java service, JVM properties can also be set for the OSE instance. In some cases, the storage platform may be in another network that OSE
can reach only through a configured proxy server. The system administrator can set the JVM proxy options for OSE by using the command:
ose jvmargs -v "-Dhttp.proxyHost=proxy.cloud.com -Dhttp.proxyPort=3128"
PostgreSQL Database
OSE uses a PostgreSQL database for storing the metadata of its S3 storage-related operations. The recommended hardware for the database is
8 CPU cores and 12 GB RAM for most OSE deployments.
Database disk usage is driven by the object count, not by the object content size. The more objects you create in the system, the more disk
space the database occupies. Many factors determine disk space consumption, but roughly one million objects consume about 0.6 GB of disk. Database
indexes and logs also consume disk space. So, assuming you have one billion objects in an object storage cluster, you need to prepare more than
700 GB of disk for the database machine.
The OSE database contains a table, object_info, with one row for each managed object. If OSE handles twenty million objects, the table has
twenty million rows. Querying such a table can become a performance bottleneck if the database machine has limited CPU and memory resources.
Given the estimate for database disk consumption by object count (about 0.6 GB per million objects), it is recommended to allocate a
buffer for the disk size from the beginning.
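As a rough sizing sketch based on the ~0.6 GB per million objects figure above (the 20% headroom for indexes, logs, and growth is an assumption, not a documented value), the required disk can be estimated as follows:

```shell
# Estimate PostgreSQL disk consumption from the expected object count.
# 0.6 GB per million objects is the figure quoted in the text; the 20%
# headroom for indexes, logs, and growth is an assumption.
objects=1000000000   # one billion objects
meta_gb=$(awk -v n="$objects" 'BEGIN { printf "%d", n / 1000000 * 0.6 }')
total_gb=$(awk -v m="$meta_gb" 'BEGIN { printf "%d", m * 1.2 }')
echo "metadata: ${meta_gb} GB, provision at least: ${total_gb} GB"
```

For one billion objects this yields roughly 720 GB, consistent with the "more than 700 GB" guidance above.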
Public S3 Endpoint
The S3-compliant API has two path styles:
• Path-Style Requests. The path pattern for Amazon S3 is https://s3.Region.amazonaws.com/bucket-name/key name, for example, https://s3.us-west-
2.amazonaws.com/mybucket/puppy.jpg.
• Virtual Hosted-Style Requests. The path pattern for Amazon S3 is https://bucket-name.s3.Region.amazonaws.com/key name, for
example https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png.
OSE supports both styles of S3 endpoint, but the region segment is not part of the OSE S3 URI. For the following procedure, assume your organization's root FQDN is acme.com.
Procedure
1. Run the command to turn off the path style and switch to the virtual-hosted style:
ose args set -k s3.client.path.style.access -v false
2. Restart the ose service.
ose service restart
3. Configure wildcard DNS mapping for OSE S3 endpoint, i.e., map all *.s3.acme.com to the OSE load balancer.
4. Create a wildcard SSL certificate for the wildcard FQDN, i.e., make a common name as *.s3.acme.com.
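Step 4 can be sketched with OpenSSL. The file names and validity period below are arbitrary, and a production deployment would use a CA-signed certificate rather than the self-signed one generated here:

```shell
# Generate a self-signed wildcard certificate whose common name covers
# every virtual-hosted bucket FQDN under s3.acme.com.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout wildcard-s3.key -out wildcard-s3.crt \
  -subj "/CN=*.s3.acme.com"
# Verify the subject of the generated certificate.
openssl x509 -in wildcard-s3.crt -noout -subject
```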
Logging
The OSE logging level has an impact on performance. To improve performance, do not turn on DEBUG logging. In addition, every request
access is logged by default; this can be turned off as well.
The following examples show how to set the logging level to WARN or turn off logging. After changing the log level or turning logging off, you need to
restart the OSE service.
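The exact property keys are not shown in this text. Since OSE is a Spring Boot service, a plausible sketch uses the standard Spring Boot logging properties through ose args set; treat the key names below as assumptions to verify against the OSE documentation:

```shell
# Assumed keys: logging.level.root is the standard Spring Boot root log
# level; server.undertow.accesslog.enabled controls the access log.
ose args set -k logging.level.root -v WARN
ose args set -k server.undertow.accesslog.enabled -v false
ose service restart
```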
If needed, you can increase the I/O thread count to improve I/O performance. However, the number should not be too high. For example, if each
OSE host has one 8-core socket, the default I/O thread count for OSE is 2 * 8 = 16. You can increase the number to 24 with the command below:
ose args set -k server.undertow.threads.io -v 24
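The default of 2 * cores can be computed on the host itself; the 24 used above corresponds to an 8-core host, and the 1.5x scaling factor below is just an illustration, not a documented recommendation:

```shell
# Compute the default Undertow I/O thread count (2 * CPU cores) and a
# modestly increased value; nproc reports the core count of this host.
cores=$(nproc)
default_io_threads=$((2 * cores))
increased=$((default_io_threads * 3 / 2))   # e.g. 16 -> 24 on an 8-core host
echo "default=${default_io_threads} increased=${increased}"
```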
Note: The setting below is not by itself sufficient to increase the concurrency of database connections; you should also consider increasing the max
connection count on the PostgreSQL side. For example, if the PostgreSQL server's max connection count is 1000 and you have deployed 5 OSE server
nodes, then the connection pool of each OSE node should stay below the max connection count divided by the OSE node count, that is, below 200.
ose args set -k spring.datasource.hikari.maximumPoolSize -v 180
Other settings for the database connection pool are shown below. For an explanation of the terms, refer
to https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby.
ose args set -k spring.datasource.hikari.maxLifetime -v 1800000
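The per-node pool limit above can be derived mechanically; the 20-connection headroom used here is an arbitrary safety margin, not a documented value:

```shell
# Divide the PostgreSQL max connection count across the OSE nodes and
# keep a safety margin below the per-node share.
pg_max_connections=1000
ose_nodes=5
per_node_limit=$((pg_max_connections / ose_nodes))   # 200
pool_size=$((per_node_limit - 20))                   # 180, as in the example
echo "spring.datasource.hikari.maximumPoolSize=${pool_size}"
```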
Object count estimation is used for such large buckets; the threshold is one hundred thousand objects per bucket. The estimation can be adjusted or
turned off with ose args set, as shown earlier for the oss.object.count.estimate property.
The optional argument --start defines the start time for the logs to be collected. The default value is 2018-01-01.
The optional argument --end defines the end time for the logs to be collected. If not specified, the end date is the current date.
To assess the impact of OSE proxying of S3 APIs, the same tests were performed directly to the Cloudian HyperStore (through a load balancer). The
following diagram shows the network flows of the S3 API communication.
Note that HTTPS was used both for front-end traffic (COSBench to Object Storage Extension nodes) and backend traffic (Object Storage Extension to
Cloudian HyperStore or COSBench to Cloudian HyperStore).
Workloads: 200 workers doing writes and reads to 10 buckets with 100 MB objects
Table 4. Cloudian HyperStore - HTTPS Write/Read of 100 MB Objects by 200 Workers across 10 Buckets
Scenario 2 - Concurrency
Workloads: Write, read, and delete for object size 100 MB at different concurrency levels (10 – 200 workers).
Table 5. HTTPS 100 MB Objects with Various Concurrency 10, 50, and 200 Workers
Workloads: 200 workers doing writes and reads to 10 buckets with 1 MB objects
Step 0: Prepare data for read
Step 1: Write for 5 mins
Step 2: Read for 5 mins
Step 3: Delete for 5 mins
Step 4: Clean up all buckets and objects
Table 6. Read and write of small objects by 200 Workers across 10 Buckets
Workloads: Write, read, and delete for object sizes of 1 MB, 10 MB, and 100 MB with 200 workers across 10 buckets.
Conclusion
As can be seen from the above test results, VMware Cloud Director Object Storage Extension performance is very much in line with the pure storage
platform performance and does not add significant overhead, with maximums around 5 - 15%.
We also noted that smaller object sizes with lower concurrency add relatively more performance overhead.
In this test setup, Object Storage Extension was deployed in a five-node configuration. The object storage platform consists of five load-balanced
Dell EMC ECS hardware appliances. The workloads were simulated by three VM nodes running COSBench, an industry-standard benchmark tool for
object storage. To assess the impact of Object Storage Extension proxying of S3 APIs, the same tests were performed directly against the
ECS nodes (through the load balancer). The following diagrams show the network flows of the S3 API communication.
Workloads: 100 workers doing writes and reads to 25 buckets with 10 MB objects.
Table 9. Dell EMC ECS - HTTPs 10 MB Objects with Concurrency of 100 Workers across 25 Buckets
Workloads: Write, read, and delete for object size 100 MB at different concurrency levels (10 – 100 workers).
Table 10. Dell EMC ECS - HTTPs 100 MB Objects with Concurrency of [10-100] Workers
Workloads: Write, read, and delete for object size 4 KB with 200 workers across 30 buckets.
Table 11. Dell EMC ECS - HTTPs 4 MB Objects with Concurrency of 100 Workers across 30 Buckets
Workloads: Write, read, and delete for various objects ranging from 1 MB – 1 GB with 100 workers across 100 buckets.
Table 12. Dell EMC ECS - HTTPs 1 MB – 1GB Objects with Concurrency of 100 Workers across 100 Buckets
Conclusion
As can be seen from the above test results, VMware Cloud Director Object Storage Extension performance is largely in line with the pure storage
platform performance, adding overhead with maximums around 5 - 25%.
VMware Cloud Director cells: 3 nodes, version 10.2, appliance deployment (2 CPU, 12 GB RAM, 132 GB HDD)
VMware Cloud Director Object Storage Extension nodes: 3 nodes, version 2.2.3, CentOS 7 VM (8 vCPUs, 8 GB RAM, 128 GB HDD)
AWS S3: 1
Workloads: 100 workers doing writes and reads to 25 buckets with 10 MB objects.
Table 14. AWS- HTTPs 10 MB Objects with Concurrency of 100 Workers across 25 Buckets
Workloads: Write, read, and delete for object size 100 MB for different concurrency levels (10 – 200 workers).
Table 15. AWS- HTTPs 100 MB Objects with Concurrency of [10-200] Workers
Workloads: Write, read, and delete for object size 1 MB with 100 workers across 30 buckets.
Table 16. AWS- HTTPs 1 MB Objects with Concurrency of 100 Workers across 30 Buckets
Workloads: Write, read, and delete for various object sizes ranging from 1 MB – 1 GB with 100 workers across 100 buckets
Table 17. AWS- HTTPs 1 MB -1 GB Objects with Concurrency of 100 Workers across 100 Buckets
Conclusion
As can be seen from the test results above, VMware Cloud Director Object Storage Extension performance is largely in line with the pure storage
platform performance. It does not add significant overhead, with maximums around 5 - 13%.
Abbreviations
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 vmware.com Copyright © 2022 VMware, Inc.
All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents
listed at vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. and its subsidiaries in the United States and other jurisdictions.
All other marks and names mentioned herein may be trademarks of their respective companies.