IBM Cloud Object Storage Concepts and Architecture - System Edition
Alexander Gavrin
Bradley Leonard
Hao Jia
Johan Verstrepen
Jussi Lehtinen
Lars Lauber
Patrik Jelinko
Raj Shah
Steven Pratt
Vasfi Gucer
Redpaper
Introduction
Object Storage is the primary storage solution that is used in the cloud and in on-premises
environments as a central storage platform for unstructured data. Object Storage is growing
more popular for the following reasons:
It is designed for exabyte scale.
It is easy to manage and yet meets the growing demands of enterprises for a broad set of
applications and workloads.
It allows users to balance storage cost, location, and compliance control requirements
across data sets and essential applications.
IBM® Cloud Object Storage (IBM COS) system provides industry-leading flexibility that
enables your organization to handle the unpredictable and constantly changing needs of
business and evolving workloads.
IBM COS system is a software-defined storage solution that is hardware aware. This
awareness allows IBM COS to be an enterprise-grade storage solution that is highly available
and reliable and uses commodity x86 servers. IBM COS takes full advantage of this hardware
awareness by ensuring that the server performs optimally from a monitoring, management,
and performance perspective.
The target audience for this IBM Redpaper publication is IBM COS architects, IT specialists,
and technologists.
This paper is the third edition of the paper IBM Cloud Object Storage Concepts and
Architecture, REDP5537-00, that was originally published on May 29, 2019.
IBM COS includes a rich set of features to match various use cases.
IBM validated more than 100 IBM and third-party applications with IBM COS and created
extensive technical documents that describe their interoperability.
Tip: Most applications that support S3 API can use IBM COS for storage.
For more information, see the IBM Cloud® Object Storage website:
https://www.ibm.com/products/cloud-object-storage.
Figure 2 shows the differences between block, file, and Object Storage.
Figure 2 Differences between block, file, and Object Storage
Industry-specific use cases for IBM COS include the following examples:
Healthcare and Life Sciences:
– Medical imaging, such as picture archiving and communication system (PACS) and
magnetic resonance imaging (MRI)
– Genomics research data
– Health Insurance Portability and Accountability Act (HIPAA) of 1996 regulated data
Media and entertainment; for example, audio and video
Financial services; for example, regulated data that requires long-term retention or
immutability
For information about use cases, see IBM Cloud Object Storage System Product Guide,
SG24-8439.
Note: IBM COS Software is available in several licensing models, including perpetual,
subscription, or consumption.
This IBM Redpaper publication explains the architecture of IBM Cloud Object Storage
on-premises offering and the technology behind the product. For more information about
the IBM Cloud Object Storage use case scenarios and deployment options, see IBM Cloud
Object Storage System Product Guide, SG24-8439.
For more information about the IBM Cloud Object Storage public cloud offering, see the
following publications:
Cloud Object Storage as a Service: IBM Cloud Object Storage from Theory to Practice,
SG24-8385
How to Use IBM Cloud Object Storage When Building and Operating Cloud Native
Applications, REDP-5491
IBM Cloud Object Storage architecture
IBM COS is a dispersed storage system that uses several storage nodes to store pieces of
the data across the available nodes. IBM COS uses an Information Dispersal Algorithm (IDA)
to break objects into encoded and encrypted slices that are then distributed to the storage
nodes.
No single node has all of the data. This configuration makes it safe and less susceptible to
data breaches while needing only a subset of the storage nodes to be available to retrieve the
stored data. This ability to reassemble all the data from a subset of the slices dramatically
increases the tolerance to node and disk failures.
The IBM COS architecture is composed of the following functional components. Each of
these components runs IBM COS software that can be deployed on certified, industry
standard hardware:
IBM Cloud Object Storage Manager
IBM Cloud Object Storage Manager provides a management interface that is used for
administrative tasks, such as system configuration, storage provisioning, and monitoring
the health and performance of the system.
The Manager can be deployed as a physical appliance, VMware virtual machine, or
Docker container.
IBM Cloud Object Storage Accesser® node
IBM Cloud Object Storage Accesser node encrypts and encodes data on write and
decodes and decrypts it on read. It is a stateless component that presents the storage
interfaces to the client applications and transforms data by using an IDA.
The Accesser node can be deployed as a physical appliance, VMware virtual machine,
Docker container, or can run as an embedded Accesser node on the IBM Slicestor®
appliance.
IBM Cloud Object Storage Slicestor node
The IBM Cloud Object Storage Slicestor node is responsible for storing the data slices. It
receives data from the Accesser node on write and returns data to the Accesser node as
required by reads. The Slicestor also ensures the integrity of the saved data and rebuilds if
necessary.
Slicestor nodes are deployed as physical appliances.
Figure 4 shows a simple architecture layout of the different components in IBM COS.
S3 interface: IBM COS uses the S3 interface for all storage operations; for example:
PUT: Writes an object to the storage.
GET: Reads an object from the storage.
DELETE: Deletes an object from storage.
LIST: Lists objects that are in a bucket.
All API calls are issued against an IBM COS Accesser node.
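As a concrete illustration, the four operations can be exercised with boto3, a common S3 client. The endpoint URL, credentials, bucket name, and key below are illustrative placeholders, not values from this paper:

```python
# Sketch of the four S3 operations against an IBM COS Accesser node.
# Endpoint, credentials, bucket, and key are illustrative placeholders.
def make_client():
    import boto3  # assumed installed; any S3-compatible client works
    return boto3.client(
        "s3",
        endpoint_url="https://accesser.example.com",  # an Accesser node
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

def demo(s3, bucket="my-vault", key="hello.txt"):
    s3.put_object(Bucket=bucket, Key=key, Body=b"hello")         # PUT
    data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()  # GET
    listing = s3.list_objects_v2(Bucket=bucket)                  # LIST
    names = [o["Key"] for o in listing.get("Contents", [])]
    s3.delete_object(Bucket=bucket, Key=key)                     # DELETE
    return data, names
```

Calling demo(make_client()) would run the operations against a real system; keeping the client as a parameter also makes the sketch easy to exercise against a stub.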
Core concepts
This section provides information about IBM COS core concepts. Figure 5 shows the major
IBM COS logical concepts.
Figure 5 IBM Cloud Object Storage logical concepts
Device sets
IBM COS uses the concept of device sets to group Slicestor devices (see Figure 6 on page 8).
Each device set consists of several Slicestor devices.
Device sets can be spread across one or multiple data centers. All Slicestor nodes in one
device set must have the same configuration (Slicestor node model, number of drives, and
drive size).
Device sets in a storage pool can have different configurations. This configuration enables
adding newer Slicestor nodes to a system without replacing older Slicestor nodes.
Note: Storage pool expansion must follow specific rules. For more information, see IBM
Cloud Object Storage System Product Guide, SG24-8439.
Vaults
Vaults are logical storage containers for data objects that are contained in a storage pool, as
shown in Figure 8.
Figure 8 IBM Cloud Object Storage vault
Vaults are deployed on a storage pool and automatically spread across all the device sets.
One or more vaults can be deployed to a storage pool.
Mirrored vaults
A vault that is on one storage pool can be mirrored to a vault on another storage pool,
commonly in a different location. Both component vaults are controlled by a mirror and
storage operations are issued against the mirror. All objects in the mirror are available on both
vaults. This concept is usually seen in a two site deployment, but can be used for other use
cases, such as hub and spoke design.
A mirrored setup across two different sites protects the IBM COS system against a site
failure. If one site is unavailable, reads and writes occur from the available vault automatically.
A failover procedure is not required if the application can reach a functioning Accesser node
at either site. A failback procedure is not required when the site comes back online.
Access pools
An access pool consists of one or more Accesser nodes, which present a vault to an
application. More than one access pool can be used to separate traffic or to restrict access
to certain vaults; in this way, tenant separation can be implemented.
The connection between access pools and vaults is a many-to-many connection. One vault
can be deployed on many access pools and one access pool can have more than one vault
deployed.
Tip: If the read threshold is set higher, the IBM COS system can survive fewer failures,
but the storage efficiency is better.
Write threshold: The write threshold of an IDA is the number of slices (out of the width) that
must be written before the Accesser node returns success to the client. The write
threshold always must be higher than the read threshold so that the data remains available,
even if a failure occurs immediately after the write is completed. For example, if the write
threshold of a 12-wide system is set to 8, the system must successfully write eight slices to
complete a write request.
Tip: If the write threshold is set lower, the IBM COS system can survive more failures,
but the storage efficiency suffers because of the higher redundancies.
Expansion factor: The expansion factor is calculated as the width divided by the read
threshold. It also defines the ratio of raw capacity versus usable capacity. See Table 1 on
page 12 for examples.
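As a worked example (the figures are illustrative IDA configurations, not recommendations), the expansion factor and the resulting usable capacity can be computed directly:

```python
# Expansion factor = width / read threshold; it is the ratio of raw
# capacity to usable capacity for an IDA.
def expansion_factor(width: int, read_threshold: int) -> float:
    return width / read_threshold

def usable_capacity(raw_tb: float, width: int, read_threshold: int) -> float:
    # Equivalent to raw / expansion factor, written this way to stay exact.
    return raw_tb * read_threshold / width

# A 12-wide IDA with a read threshold of 7 stores about 1.71 raw bytes
# per usable byte, so 1200 TB raw yields 700 TB usable.
print(round(expansion_factor(12, 7), 2))   # 1.71
print(usable_capacity(1200, 12, 7))        # 700.0
```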
Dispersal modes
IBM COS can operate in two different dispersal modes, as shown in Figure 9.
In Standard Dispersal Mode (SD Mode), which is also called non-Concentrated Dispersal
Mode, each slice is written on a different Slicestor node. This mode ensures the highest
performance and availability because one Slicestor node down means that only one slice is
unavailable. The SD Mode is usually used in larger configurations and supported on systems
with at least 12 nodes.
Note: SD Mode allows you to configure width, read, and write thresholds. For more
information about the IDA configuration guidelines, see IBM Cloud Object Storage System
Product Guide, SG24-8439.
In Concentrated Dispersal Mode (CD Mode), multiple slices of a single object segment are
placed on a single Slicestor node, but never on the same disk. This mode enables cost
efficient smaller systems starting from 72 terabytes to a few petabytes. If one Slicestor node
goes down, more slices become unavailable. CD Mode is supported on systems starting with
three Slicestor nodes.
Note: CD Mode defines preconfigured IDAs that are optimized for storage or performance.
Location options
IBM COS offers options to be deployed in one or more sites. A single site setup does not
protect against a site failure, although it does provide the lowest possible overhead and the
best latency. Two sites are typically set up as a mirrored configuration. IBM COS realizes its
full advantages in a geo-dispersed setup across a minimum of three sites. Slicestor nodes are distributed
across multiple sites for reliability and availability. In a geo-dispersed setup, IBM COS relies
on a single copy of data that is protected by way of erasure coding against site failures.
The nodes of a single IBM COS system can be spread across distances of thousands of
kilometers if the round-trip latency between nodes does not exceed 100 milliseconds.
Container mode
The default for an IBM COS system is vault mode, which is suitable for most customer
deployments. For systems that require thousands or millions of buckets or tenants, IBM COS
can be deployed in container mode.
Note: The general term for a logical storage unit in S3 is a bucket. In vault mode, a bucket
is referred to as a vault. In container mode, a bucket is referred to as a container.
Container mode provides the following capabilities to the IBM COS system:
Support for millions of buckets
Support for millions of users
Self-provisioning capability for service by using RESTful API
Support for billing users based on usage
Isolation of objects between users and tenants
Recent software versions introduced several enhancements for container mode, including the
Storage Account Portal and S3 versioning. These options are discussed later in this section.
For more information on container vaults in ClevOS version 3.15.6 and later, see the Manager
Administration Guide and the Container Mode Guide:
https://www.ibm.com/docs/en/coss/latest?topic=mode-container-guide
https://www.ibm.com/docs/en/coss/latest?topic=vaults-container
https://www.ibm.com/docs/en/coss/latest?topic=vaults-configure-container-mode
The main features of vault and container mode are compared in Table 2.
Table 2 Vault and container mode comparison
Feature: Bucket, user, and permission management
– Vault mode: Via GUI or REST API on the Manager node
– Container mode: Via Service API on the Accesser nodes or the Storage Account Portal in
the Manager interface (ClevOS 3.15.4+)
Feature: Supported dispersal modes
– Vault mode: Standard and Concentrated Dispersal Mode
– Container mode: Standard Dispersal Mode and Concentrated Dispersal Mode
(ClevOS 3.15.1+)
Note: In previous ClevOS versions, clients deploying container mode needed their own
solution, such as a portal or script-based procedures, for managing accounts, credentials,
and buckets by using the Service API.
As of today, some container properties cannot be adjusted from the graphical user
interface, and modifications still require the use of the Service API; for example:
Hard quota settings
IP allow/disallow (firewall) configurations
Kafka notifications configurations
The portal also contains a Usage Metrics section, which allows the administrator to generate
reports on the following:
Current storage usage by container
Aggregated storage usage over a specific period of time by a storage account
Daily historical usage over a specified date-range by container
The report can be exported in CSV, JSON, and XML formats. The Service API has been
extended to allow for custom historic usage queries when the existing export options are not
sufficient.
Note: For more information on the Container Mode Service API, see:
https://www.ibm.com/docs/en/STXNRM_latest/kc_pdf_files.html, Table 3 -
Developer Guides
https://www.ibm.com/docs/en/coss/latest?topic=mode-using-service-apis-manger-rest-api
S3 versioning
ClevOS 3.15.7 introduces S3 Object Versioning for containers in container mode. This feature
is already available for standard vaults. Versioning can be enabled or disabled on a per bucket
basis using S3 API calls.
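Enabling or suspending versioning per bucket through the S3 API can be sketched as follows. The client is any S3-compatible client (for example, boto3) pointed at an Accesser node, and the bucket name is a placeholder:

```python
# Toggle S3 versioning on a bucket (vault or container) via the S3 API.
def set_versioning(client, bucket, enabled=True):
    status = "Enabled" if enabled else "Suspended"
    client.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": status},
    )
    return status
```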
Slight differences between the standard vault and the container vault implementation must be
considered when moving from a vault to a container with versioning enabled:
1. Limit on the number of versions for an object
– Container vault: No limit
– Standard vault: Maximum of 1000 versions
2. If an object with versionID of null exists when versioning is disabled or suspended, then:
– In a container vault this null version will be overwritten when the object is modified.
– In a standard vault, the null version will be saved with a new versionID on a subsequent
overwrite.
Note: More information about the changes regarding S3 object versioning can be found in
the release notes of ClevOS 3.15.7:
https://delivery04.dhe.ibm.com/sar/CMA/SSA/09ogz/2/IBM_Release_Notes_3.15.7.pdf
Segmentation
If an object is larger than 4 MiB, the Accesser node splits the data into 4 MiB segments for
optimal performance. For example, a 1 GiB object is split into 256 segments of 4 MiB each,
as shown in Figure 12.
Figure 12 IBM Cloud Object Storage Accesser node creating 4 MiB segments
Note: Smaller objects (less than 1 MiB) are stored together in bin files. This configuration
enables better space management and faster read, write, and list operations.
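The segment count for a given object size is simple ceiling division, which can be sketched as:

```python
import math

SEGMENT_SIZE = 4 * 1024 * 1024  # 4 MiB segments, as described above

def segment_count(object_size_bytes: int) -> int:
    """Number of segments the Accesser node creates for an object."""
    return max(1, math.ceil(object_size_bytes / SEGMENT_SIZE))

print(segment_count(1024**3))  # a 1 GiB object -> 256 segments
```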
SecureSlice
SecureSlice uses an all-or-nothing-transform (AONT) to encrypt the data. AONT is a type of
encryption in which the information can be deciphered only if all the content is known.
6. Append the result to the encrypted data to create the AONT package.
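A minimal, dependency-free sketch of an all-or-nothing transform in this spirit follows: encrypt with a random key, hash the ciphertext, XOR the key with the hash, and append the result. A SHA-256-based keystream stands in for the real cipher purely to keep the sketch self-contained; SecureSlice itself uses AES-GCM-256 by default and differs in detail:

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Illustrative SHA-256 counter-mode keystream; a stand-in for AES.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def aont_package(data: bytes) -> bytes:
    key = os.urandom(32)
    ciphertext = _keystream_xor(key, data)
    # Mask the key with a hash of ALL the ciphertext: without every byte
    # of the package, the key (and so the data) cannot be recovered.
    mask = hashlib.sha256(ciphertext).digest()
    masked_key = bytes(a ^ b for a, b in zip(key, mask))
    return ciphertext + masked_key  # append the result -> AONT package

def aont_unpackage(package: bytes) -> bytes:
    ciphertext, masked_key = package[:-32], package[-32:]
    mask = hashlib.sha256(ciphertext).digest()
    key = bytes(a ^ b for a, b in zip(masked_key, mask))
    return _keystream_xor(key, ciphertext)
```

Because the masked key depends on a hash of the entire ciphertext, no part of the plaintext can be deciphered unless the whole package is present, which is the defining AONT property.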
Note: In general, erasure coding transforms a message of k symbols into a longer
message of n symbols such that the original message can be recovered from a subset of
the n symbols (any k symbols are enough to reconstruct the data). In IBM COS erasure
coding, k is always the read threshold and n is the width.
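The k-of-n idea can be demonstrated with a toy Reed-Solomon-style code over the prime field GF(257), chosen so that every byte value fits: the data symbols become polynomial coefficients, and slice i stores the polynomial's value at x = i. This is purely an illustration of the math, not IBM's optimized implementation:

```python
P = 257  # prime field; every byte value 0..255 is a field element

def encode(data, n):
    """Make n slices from k = len(data) symbols: evaluate the polynomial
    whose coefficients are the data at x = 1..n."""
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(slices, k):
    """Recover the k data symbols from ANY k slices, via Lagrange
    interpolation carried out on polynomial coefficients."""
    pts = slices[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis = [1]  # coefficients of prod_{j != i} (x - xj), ascending
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            new = [0] * (len(basis) + 1)
            for e, b in enumerate(basis):
                new[e + 1] = (new[e + 1] + b) % P  # b * x^(e+1)
                new[e] = (new[e] - xj * b) % P     # b * (-xj) * x^e
            basis = new
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P      # Fermat inverse mod P
        for e, b in enumerate(basis):
            coeffs[e] = (coeffs[e] + scale * b) % P
    return coeffs

data = [72, 105, 33]                  # k = 3 data symbols
slices = encode(data, 5)              # width n = 5, tolerates 2 losses
assert decode(slices[2:], 3) == data  # any 3 of the 5 slices suffice
```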
Writing data in Standard Dispersal Mode
If the storage pool is configured to use SD Mode, each Slicestor node in a device set stores a
single slice, as shown in Figure 15.
For mirrored vaults, the following modes are available when the mirror is created:
Asynchronous (default setting)
The Accesser node sends acknowledgment to the client application after the write
operation is confirmed on either side of the mirrors.
Synchronous
The Accesser node sends acknowledgment to the client application after the write
operation completes on both sides of the mirror, although both sides do not have to
succeed in writing the data.
Hard Synchronous
The Accesser node sends acknowledgment to the client application after both sides of the
mirror confirmed the successful write.
Figure 18 shows more information about each mirror mode’s advantages and disadvantages.
For more information about writing to mirrored vaults, see IBM Documentation.
SmartWrite optimistically attempts to write all slices. After the required write threshold of
slices is achieved, SmartWrite considers the write successful. The Accesser node attempts to
write the remaining slices asynchronously.
If a slice write operation times out on the Accesser node, no further attempts to write the slice
are made. The SmartWrite feature is always active and increases overall system write
performance in situations such as when:
Network connection to a site or a Slicestor node is slower or the connection is down
A degraded (limping) Slicestor node is in the system
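The SmartWrite behavior can be sketched with a thread pool: submit all slice writes, acknowledge as soon as the write threshold is met, and let the stragglers finish in the background. Here write_slice is a stand-in for the real Slicestor I/O path, and leaving the pool running after return is a simplification acceptable in a sketch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def smart_write(write_slice, slices, write_threshold):
    """Return True as soon as write_threshold slice writes report
    success; the remaining writes keep running in the background."""
    pool = ThreadPoolExecutor(max_workers=len(slices))
    futures = [pool.submit(write_slice, s) for s in slices]
    succeeded = 0
    ok = False
    for fut in as_completed(futures):
        if fut.result():
            succeeded += 1
        if succeeded >= write_threshold:
            ok = True
            break                 # acknowledge the client now
    pool.shutdown(wait=False)     # do not block on the stragglers
    return ok
```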
Tip: IBM COS supports the S3 API range read feature, which means that applications can
request a full object or a part of an object.
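A range read through the S3 API can be sketched as follows; any S3-compatible client works, and the bucket and key names are placeholders:

```python
def read_range(client, bucket, key, start, end):
    """Fetch bytes start..end (inclusive) of an object with an HTTP
    Range header, as supported by the S3 API."""
    resp = client.get_object(Bucket=bucket, Key=key,
                             Range=f"bytes={start}-{end}")
    return resp["Body"].read()
```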
Figure 20 shows reading data from IBM COS. Reading is successful even if multiple Slicestor
nodes are down: IBM COS requires only the read-threshold number of Slicestor nodes to be
available to reconstruct the data.
Figure 20 Reading data from IBM Cloud Object Storage
Figure 23 shows the benefits for PSS when storing small files.
What this means for you: Unlike other Object Storage systems that use
space-consuming mirroring for small objects, IBM Cloud Object Storage uses advanced
erasure coding for all object sizes, small or large. This feature enables an efficient and
reliable way of storing data.
The goals of the new architecture are to provide significant improvements in the following areas:
Performance:
– Efficient support of media that prefers or requires sequential writes, specifically
shingled magnetic recording (SMR), hard disk drive (HDD), and solid-state drive
(SSD).
– I/O performance approaches the theoretical media limits for reads and writes for both
throughput and latency.
– High performance against a wide range of system loads without manual configuration.
Resilience:
– Full internal consistency after hardware or software crashes (no orphaned, dangling, or
unaccounted usage) without the use of offline correction tools (for example, file system
consistency check) or full journaling of writes.
– Able to limit collateral damage from most unrecoverable read errors on disks to a small
amount of data loss (which is then rebuilt from other nodes).
– Object level atomicity (old or new version of object guaranteed to be available), even on
a single power grid.
– Synchronous mode in which success is not acknowledged to user until data is durable.
– Predictability under adverse conditions, such as limping, failing, malfunctioning
hardware.
– Protection against many attacks based on naming or content of the data written.
Space efficiency:
– Allow filling the media much closer to 100%.
– Uniform and stable performance that is largely independent of level of storage use and
number of objects.
– Small amount of overhead stored per object.
Supportability:
– Extensible architecture for future system customization and enhancements.
– Shorter code stack to hardware, which allows better maintainability and easier
root-cause analysis of issues.
– Comprehensive specialized tooling to allow for easier debugging, state introspection,
and improved root cause determination.
– Extensive statistics to allow better introspection into system behavior.
Automatic and continuous integrity checking and error correction for all slices and objects.
An IBM COS system can be designed for 99.9999999999999% reliability (durability). To
put it into perspective, this means you might expect to lose 1 byte of data in every 1 trillion
years.
What this means for you: IBM Cloud Object Storage provides always-on availability,
which means that it can tolerate even a catastrophic regional outage without downtime or
intervention. Continuous availability and reliability are built into the architecture.
What this means for you: IBM COS is designed to provide you with detailed information
about the state of drives in the solution. From disk states to built-in monitoring, the IBM
COS solution includes various capabilities that are designed to help with DLM. IBM COS
improves data reliability by proactively monitoring for disk medium errors.
Rebuilder
The following software agents comprise the Rebuilder:
Scanning agent
This agent checks the consistency of the slice names and revision numbers that are held
by the Slicestor nodes.
All Slicestor nodes scan for missing slices and limit the number of listing requests that are
processed at one time to throttle the scanning operations.
A full scan of an IBM COS system should not take more than 48 hours, even for an
exabyte scale system. Therefore, missing slices are detected within 48 hours.
Rebuild agent
This agent retrieves the new slice data from other Slicestor nodes to repair a slice on a
Slicestor node.
A rebuild agent on a Slicestor node addresses scenarios where slices are missing from
their respective Slicestor nodes or are corrupted on a drive.
Each Slicestor node in a storage pool ensures data integrity across all of the Slicestor
nodes that are in that storage pool. If a Slicestor node goes offline and misses some newly
written slice, it rebuilds any missing slices when it comes back online. The Slicestor node
reads the content of other Slicestor nodes and re-creates the missing slice.
Integrity agent
This agent checks the integrity of slices on the Slicestor nodes.
Each Slicestor node runs an integrity agent, which reads every slice that is stored on that
node and compares it against a 4-byte CRC32 checksum, which is stored alongside that
slice. If the checksum does not match, the slice is deprecated and then rebuilt from the
other slices for that object. The same check also occurs every time a client system
attempts to read an object; if the check fails, the affected slices are rebuilt.
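The per-slice CRC32 check can be sketched with the standard library; the record layout here (checksum appended to the slice data) is a simplification for illustration:

```python
import zlib

def store_slice(data: bytes) -> bytes:
    """Append a 4-byte CRC32 checksum to the slice data."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def verify_slice(record: bytes) -> bool:
    """Recompute the CRC32 and compare it with the stored checksum."""
    data, stored = record[:-4], record[-4:]
    return zlib.crc32(data).to_bytes(4, "big") == stored
```

Any single corrupted byte changes the CRC32, so a damaged slice is reliably flagged for rebuild.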
Scalability
IBM COS was tested at web-scale with production deployments that exceed hundreds of
petabytes of capacity, and can scale to exabytes.
What this means for you: IBM Cloud Object Storage is built for large data sets and can
scale to exabytes while maintaining availability, reliability, manageability, and
cost-effectiveness without any compromise. The scalability is virtually unlimited.
IBM COS security is separated into appliance, user authentication, network, data, retention
enabled vaults, client, and object security.
Appliance security
The underlying IBM COS Manager, IBM COS Accesser, and IBM COS Slicestor appliances
have multiple levels of security:
Stock open source Linux distribution and Linux kernel that are pared down so that only the
minimal necessary functions are enabled.
Industry standard 64-bit processors that support the NX (No eXecute) bit, which prevents
a buffer overflow from turning into remote code execution vulnerabilities.
Each appliance has its own internal firewall, which restricts network access to everything
but the appliance’s critical services.
Each appliance is monitored in real time by the Manager, which receives SSL-encrypted
system logs.
Supports certificate-based digital signatures of installation files to ensure that the content
has not been tampered with or corrupted between the time it was signed and the time it
was received by the user.
User authentication
Authentication allows users and storage administrators to identify themselves to other entities
in a trusted manner. Consider the following points:
Credentials that are exchanged between appliances are encrypted by using TLS.
Customers can turn off TLS between the Accesser and Slicestor appliances if it is not
required. For those customers, the confidentiality of credentials is still protected. The
appliances encrypt the credentials by using a key that is negotiated with the Diffie-Hellman
protocol. This configuration ensures that no one can sniff network traffic to intercept
credentials.
Passwords that are stored in the credential database are not stored in raw form; rather,
they are salted and hashed to prevent direct exposure or the application of rainbow tables
if the password database falls into the wrong hands.
IBM COS supports direct integration with an Active Directory (AD) server. Passwords are
encrypted in flight over Lightweight Directory Access Protocol (LDAP) if supported by the
AD server.
User authentication is supported by digital certificates. Authenticating by using digital
certificates is superior to authentication with passwords because the private credential
(the private key) is never divulged during the authentication process.
All user operations that are performed in the GUI and on the appliances are audited to
provide accountability and traceability.
Supports local named accounts for GUI and all appliances for individual accountability.
Supports user name, password, and SSH keys for SSH authentication.
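The salt-and-hash scheme mentioned above for stored passwords can be sketched with the standard library; PBKDF2-HMAC-SHA256 and the iteration count are illustrative choices, not a statement of what IBM COS uses internally:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); a fresh random salt defeats rainbow tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```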
Network security
All network traffic that flows within IBM COS is encrypted. Authentication with SSL/TLS
requires the use of digital certificates and these certificates must be verified as belonging to a
valid node. Consider the following points:
Appliances are given a signed digital certificate at the time they are approved into IBM
COS.
Approval requires an IBM COS administrator to log in to the management interface, view
the request, and authorize it. The administrator can see the IP address, MAC address, and
fingerprint of the appliance and verify that each is valid before approval into IBM COS.
After approval, the appliance is granted a certificate by the internal certificate authority
(CA) for IBM COS. All appliances in IBM COS trust this CA and any appliance that owns a
valid certificate that is signed by this CA.
The use of external CAs is supported.
Appliance certificates can be revoked by the Manager for any reason by using a certificate
revocation list (CRL), which is periodically polled by every appliance in IBM COS.
All configuration information, such as Access Control Lists (ACLs), vault quotas, and device
IP addresses, is published over HTTPS.
Data security
Securing data has three main requirements: confidentiality, integrity, and availability. All three
of these requirements are satisfied by using SecureSlice. Consider the following points:
SecureSlice is a standard product feature that is enabled by default and all data is
encrypted.
SecureSlice can be configured to use any of the following combinations of encryption and
data integrity algorithms:
– RC4-128 encryption with MD5-128 hash for data integrity
– AES-128 encryption with MD5-128 hash for data integrity
– AES-256 encryption with SHA-256 hash for data integrity
– AES-GCM-256 encryption with SHA-256 hash for data integrity (default and
recommended)
SecureSlice does not require an external key management system.
SecureSlice supports Server-Side Encryption with Customer Provided Keys (SSE-C).
To prevent accidental deletion of data by a single administrator, vault deletion
authorization can be enabled where a second administrator must approve the delete
request.
Objects that are stored in retention enabled buckets are protected objects that include
associated retention periods and optional legal holds. Protected objects cannot be deleted or
modified until the retention period expires and all legal holds on the object are removed.
With this feature, IBM COS is natively compliant with the following key standard and
compliance requirements:
Securities and Exchange Commission (SEC) Rule 17a-4(f) (for more information, see this
web page):
– SEC 17a-4(f)(2)(ii)(A): Protect data from deletion and overwriting.
– SEC 17a-4(f)(2)(ii)(B): Automatically verify that the storage system properly stored the
data.
– SEC 17a-4(f)(2)(ii)(C): Manage retention periods for the objects.
– SEC 17a-4(f)(2)(ii)(D): Download indexes and records.
– SEC 17a-4(f)(2)(iii/v): Store duplicate copies and provide audit capabilities.
Financial Industry Regulatory Authority (FINRA) Rule 4511, which references
requirements of SEC Rule 17a-4(f).
Commodity Futures Trading Commission (CFTC) Rule 1.31(b)-(c).
Additional reference: For more information about deployment details and feature
limitations, see IBM Documentation.
What this means for you: IBM Cloud Object Storage is designed with a high level of
security in mind to protect your data from security breaches. From built-in encryption of
data at rest and in motion, to a range of authentication and access control options, the IBM
Cloud Object Storage solution includes a wide range of capabilities that are designed to
help meet your security requirements. These security capabilities are implemented to help
enable better security, without compromising scalability, availability, ease of management,
or economic efficiency.
The object expiration feature provides automatic deletion of objects based on a lifecycle
policy that is configured at the bucket level. This feature provides the following benefits:
Allows the customer to manage their storage costs by scheduling periodic deletion of data
that is no longer needed.
Provides a better way to manage object deletion without external tools.
Adds compatibility with S3 API features; that is, applications that use S3 object-expiration
can be used with IBM COS systems.
Provides improvements in performance when deleting objects that use this built-in
object-expiration feature instead of listing and deleting objects manually.
IBM COS Manager provides the monitoring and alerting capability for this feature.
The IBM COS API supports blocking or ignoring public access for objects in buckets.
Additional reference: For more information about deployment details and feature
limitations, see IBM Documentation.
Authors
This paper was produced by a team of specialists from around the world.
Alexander Gavrin is a certified Senior IT Architect on the Hybrid Cloud Build team, based in IBM
Russia. He has more than 20 years of experience at IBM in various positions. He
started as a developer, and then worked as a team lead, senior developer, and architect on various
projects in IBM Services. His main expertise is focused on hybrid cloud technologies, artificial
intelligence, search, and data processing. Alexander is an author of a patent. He holds a
Master's degree in Applied Mathematics from Moscow Power Engineering Institute.
Bradley Leonard is an IBM Certified IT Specialist with IBM Cloud And Cognitive Software in
the United States. He has over 24 years of experience working in cloud computing, enterprise
system management, testing, and DevOps. Bradley is focused on IBM Cloud Object Storage
as a Solution Architect with the World Wide Cloud Object Storage Services team. He holds a
Bachelor’s degree in Computer Science from California Polytechnic State University, San Luis
Obispo.
Hao Jia is a Lead Engineer within IBM Cloud Object Storage (COS) Level 3 Support
organization based in Chicago, Illinois, US. He has 10 years of IT experience in Cloud Object
Storage and Telecommunications. He holds a Master’s degree in Electrical and Computer
Engineering from Illinois Institute of Technology in Chicago. Hao has been supporting the
integration of IBM COS into IBM Cloud since 2016. He is leading the effort to provide strategic
customers of IBM Cloud and COS on-premises with the best technical support experience.
Johan Verstrepen is an IBM Certified IT Specialist. He worked for 9 years on the AIX/VIOS
and Linux Support team in IBM Belgium as a virtualization and networking specialist. He then
moved to IBM Switzerland, where he has been working on the IBM COS team for 3 years,
providing IBM Cloud Object Storage product support and services. Johan holds a Bachelor’s
degree in Telecommunications from the Groep-T campus of the University of Leuven,
Belgium.
Jussi Lehtinen is a Solutions Architect for IBM File and Object Storage working for IBM
Systems in Europe. He has over 30 years of experience working in IT, the last 20 of them with
storage. He holds a Bachelor’s degree in Management and Computer Studies from Webster
University in Geneva, Switzerland.
Lars Lauber is a File and Object Storage Client Technical Specialist with IBM Cloud in
Germany. He has 3 years of experience working with IBM Cloud Object Storage.
Patrik Jelinko is a Client Technical Specialist with IBM Australia. He has 20 years of
experience as an IT infrastructure specialist, and started working with object-based storage
systems more than 15 years ago. Patrik is focused on software-defined storage and modern
data protection solutions. He holds a Master’s degree in IT Engineering from Budapest
University of Technology and Economics.
Raj Shah is an L3 Product Engineer with the IBM Cloud Object Storage (COS) organization
based in Chicago, Illinois, US. He has 9 years of IT experience working in cloud computing,
software development, and customer success. He holds a Master's degree in Embedded
Systems from Manipal University, India. He is also an active contributor to a Cloud Native
Computing Foundation open-source project.
Steven Pratt is a Solution Architect with the IBM Cloud Object Storage (COS) Solutions and
Integration team based in San Francisco, California, US. He has over 20 years of IT
experience in Cloud Object Storage, Big Data, Predictive Analytics, and Bioinformatics. He
holds a Bachelor's degree in Biology from Sonoma State University. Steve spent most of his
career in various Product Support roles, and recently added Solution Consulting, Training,
and Pre-sales support to his portfolio.
Vasfi Gucer is a project leader with the IBM Systems WW Client Experience Center. He has
more than 20 years of experience in the areas of systems management, networking
hardware, and software. He writes extensively and teaches IBM classes worldwide about IBM
products. His focus has been primarily on storage and cloud computing for the last eight
years. Vasfi is also an IBM Certified Senior IT Specialist, Project Management Professional
(PMP), IT Infrastructure Library (ITIL) V2 Manager, and ITIL V3 Expert.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
Accesser®
Aspera®
IBM®
IBM Spectrum®
Redbooks®
Redbooks (logo)®
Slicestor®
Merge Healthcare is a trademark or registered trademark of Merge Healthcare Inc., an IBM Company.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
REDP-5537-02
ISBN 0738458937
Printed in U.S.A.
ibm.com/redbooks