CEPH PERFORMANCE
Profiling and Reporting
Brent Compton, Director Storage Solution Architectures
Kyle Bader, Sr Storage Architect
Veda Shankar, Sr Storage Architect
HOW WELL CAN CEPH PERFORM?
WHICH OF MY WORKLOADS CAN IT HANDLE?
HOW WILL CEPH PERFORM ON MY SERVERS?
Questions that continually surface
FAQ FROM THE COMMUNITY
PERCEIVED RANGE OF CEPH PERF
ACTUAL (MEASURED) RANGE OF CEPH PERF
Finding the right server and network config for the job
HOW WELL CAN CEPH PERFORM?
https://github.com/ceph/ceph-brag (email pmcgarry@redhat.com for access)
Ceph performance leaderboard (ceph-brag) coming to ceph.com
INVITATION TO BE PART OF THE ANSWER
Posted throughput results
A LEADERBOARD FOR CEPH PERF RESULTS
Looking for Beta submitters prior to general availability on Ceph.com
LEADERBOARD ATTRIBUTION AND DETAILS
Still under construction
EMERGING LEADERBOARD FOR IOPS
Cluster sizes: OpenStack Starter (64 TB), S (256 TB+), M (1 PB+), L (2 PB+)
Node types: MySQL Perf Node (IOPS optimized), Digital Media Perf Node (throughput optimized), Archive Node (cost/capacity optimized)
MAPPING CONFIGS TO WORKLOAD IO CATEGORIES
Some pertinent measures
• MBps
• $/MBps
• MBps/provisioned-TB
• Watts/MBps
• MTTR (self-heal from server failure)
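The measures above are simple ratios over raw cluster numbers; a minimal Python sketch, with all input figures hypothetical:

```python
def throughput_metrics(mbps, cost_usd, provisioned_tb, watts):
    """Derive the per-config throughput measures from raw numbers."""
    return {
        "MBps": mbps,
        "$/MBps": cost_usd / mbps,
        "MBps/provisioned-TB": mbps / provisioned_tb,
        "Watts/MBps": watts / mbps,
    }

# Hypothetical 12-drive node: 500 MBps aggregate, $9,000, 48 TB raw, 400 W
m = throughput_metrics(mbps=500, cost_usd=9000, provisioned_tb=48, watts=400)
print(m["$/MBps"])  # 18.0
```

The same shape works for the IOPS-oriented measures later in the deck; only the numerators and denominators change.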
Range of MBps measured with Ceph on different server configs
DIGITAL MEDIA PERF NODES
[Bar chart: 4M read and 4M write MBps per drive, HDD sample vs SSD sample (0-500 MBps scale)]
Sequential Read Throughput vs IO Block Size
THROUGHPUT PER OSD DEVICE (READ)
[Line chart: MB/sec per OSD device vs IO block size (64, 512, 1024, 4096), for configs:]
D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
D51PH-1ULH - 12xOSD+0xSSD, 2x10G (EC3:2)
T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (EC2:2)
T21P-4U/Dual - 35xOSD+0xSSD, 10G+10G (EC2:2)
Sequential Write Throughput vs IO Block Size
THROUGHPUT PER OSD DEVICE (WRITE)
[Line chart: MB/sec per OSD device vs IO block size (64, 512, 1024, 4096), for configs:]
D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
D51PH-1ULH - 12xOSD+3xSSD, 2x10G (EC3:2)
T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
T21P-4U/Dual - 35xOSD+0xPCIe, 1x40G (EC2:2)
T21P-4U/Dual - 35xOSD+0xSSD, 10G+10G (EC2:2)
Sequential Throughput vs Different Server Sizes
SERVER SCALABILITY
[Bar chart: Rados-4M-seq-read/Disk and Rados-4M-seq-write/Disk in MBytes/sec/disk, 12 Disks/OSDs (D51PH) vs 35 Disks/OSDs (T21P)]
Sequential Throughput vs Different Protection Methods (Replication v. Erasure-coding)
DATA PROTECTION METHODS
[Bar chart: Rados-4M-Seq-Reads/disk and Rados-4M-Seq-Writes/disk in MBytes/sec/disk, for configs:]
D51PH-1ULH - 12xOSD+0xSSD, 2x10G (EC3:2)
D51PH-1ULH - 12xOSD+3xSSD, 2x10G (EC3:2)
D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
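The read/write gap between 3xRep and EC follows from the schemes' raw overhead: 3xRep writes three full copies, while EC k:m writes k+m chunks per k data chunks. A quick sketch of that arithmetic:

```python
def protection_overhead(replicas=None, k=None, m=None):
    """Usable-capacity fraction and write amplification for a protection scheme.

    Use replicas=3 for 3xRep, or k=3, m=2 for EC3:2.
    """
    if replicas is not None:
        return 1.0 / replicas, float(replicas)
    chunks = k + m
    return k / chunks, chunks / k

rep_usable, rep_amp = protection_overhead(replicas=3)  # 33% usable, 3.0x writes
ec_usable, ec_amp = protection_overhead(k=3, m=2)      # 60% usable, ~1.67x writes
```

This is capacity and write-path math only; real throughput also depends on EC encode cost and journal placement, which is what the measured charts capture.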
Sequential IO Latency vs Different Journal Approaches
JOURNALING
[Bar chart: latency in msec for Rados-4M-Seq-Reads and Rados-4M-Seq-Writes (0-4000 scale), T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep) vs T21P-4U/Dual - 35xOSD+0xPCIe, 1x40G (3xRep)]
Sequential Throughput vs Different Network Bandwidth
NETWORK
[Bar chart: Rados-4M-Seq-Reads/disk and Rados-4M-Seq-Writes/disk in MBytes/sec/disk, T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep) vs T21P-4U/Dual - 35xOSD+2xPCIe, 10G+10G (3xRep)]
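Why network bandwidth caps per-disk throughput on dense chassis can be seen with back-of-envelope math; a simplified sketch (a hypothetical helper, ignoring protocol overhead and replication traffic, both of which lower the real ceiling further):

```python
def nic_ceiling_per_osd(link_gbits, num_osds):
    """Upper bound on client MB/s per OSD once the node's NIC saturates.

    Simplified: ignores protocol overhead and replication traffic.
    """
    link_mbytes = link_gbits * 1000 / 8  # Gbit/s -> MB/s (decimal units)
    return link_mbytes / num_osds

# 35 OSDs behind a single 40GbE link: ~143 MB/s per OSD at line rate,
# versus ~208 MB/s per OSD for 12 OSDs behind 2x10GbE
print(round(nic_ceiling_per_osd(40, 35)), round(nic_ceiling_per_osd(20, 12)))
```

The point: a denser chassis needs proportionally more network per node, or its per-disk numbers fall even when the disks themselves are not the bottleneck.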
Sequential Throughput v. Different OSD Media Types (All-flash v. Magnetic)
MEDIA TYPE
Different Configs vs $/MBps (lowest = best)
PRICE/PERFORMANCE
[Bar chart: Price/Perf (w) and Price/Perf (r) in $/MBps, D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep) vs T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)]
Some pertinent measures
• MySQL Sysbench requests/sec
• IOPS (4K, 16K random)
• $/IOP
• IOPS/provisioned-GB
• Watts/IOP
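IOPS per provisioned GB is the ratio the AWS comparison charts that follow turn on; the calculation is trivial but worth pinning down (figures hypothetical):

```python
def iops_per_gb(iops, provisioned_gb):
    """IOPS per provisioned GB, the ratio AWS EBS throttling is expressed in."""
    return iops / provisioned_gb

# Hypothetical all-flash node: 50,000 4K random IOPS across 2,000 GB provisioned
print(iops_per_gb(50_000, 2_000))  # 25.0
```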
Range of IOPS measured with Ceph on different server configs
MYSQL PERF NODES
[Bar chart: 4K read and 4K write IOPS per drive, HDD sample vs SSD sample (0-60,000 scale)]
AWS provisioned-IOPS v. Ceph all-flash configs
SYSBENCH REQUEST/SEC
[Bar chart: Sysbench read, write, and 70/30 R/W requests/sec (0-80,000 scale) for P-IOPS m4.4XL, Ceph cluster cl: 16 vcpu/64MB (1 instance, 14% capacity), and Ceph cluster cl: 16 vcpu/64MB (10 instances, 87% capacity)]
AWS use of IOPS/GB throttles
GETTING DETERMINISTIC IOPS
[Bar chart: MySQL IOPS/GB for Sysbench reads and Sysbench writes (0-35 scale), P-IOPS m4.4XL vs P-IOPS r3.2XL vs GP-SSD r3.2XL]
Ceph IOPS/GB varying with instance quantity and cluster capacity utilization
MYSQL INSTANCES AND CLUSTER CAPACITY
[Bar chart: IOPS/GB values 26, 87, and 19 across P-IOPS m4.4XL, Ceph cluster cl: 16 vcpu/64MB (1 instance, 14% capacity), and Ceph cluster cl: 16 vcpu/64MB (10 instances, 87% capacity)]
Collect baseline measures
METHODOLOGY: BASELINING
1. Determine the benchmark measures most representative of business need
2. Determine the cluster access method (block, object, file)
3. Collect baseline measures:
   a. Look up manufacturer drive specifications (IOPS, MBps, latency)
   b. Single-node IO baseline (max IOPS and MBps to all drives concurrently)
   c. Network baseline (consistent bandwidth across the full route mesh)
   d. Rados baseline (max sequential throughput per drive)
   e. RBD baseline (max IOPS per drive)
   f. Sysbench baseline (max DB requests/sec per drive)
   g. RGW baseline (max object ops/sec per drive)
4. Calculate drive efficiency at each level up the stack
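Step 4, drive efficiency at each level, is each layer's per-drive result divided by the raw drive spec; a sketch with hypothetical numbers:

```python
def stack_efficiency(baselines):
    """Per-drive efficiency of each layer relative to the first (spec) entry.

    baselines: {layer_name: per-drive MBps or IOPS}, insertion-ordered,
    with the manufacturer spec listed first. All figures hypothetical.
    """
    layers = list(baselines.items())
    _, spec = layers[0]
    return {name: value / spec for name, value in layers}

eff = stack_efficiency({
    "drive spec": 180,       # vendor sequential MBps
    "single-node fio": 170,  # all drives driven concurrently
    "rados bench": 90,       # per drive, through the full Ceph stack
})
# In this sketch the Ceph layer delivers 50% of raw drive throughput
print(eff["rados bench"])  # 0.5
```

A sharp efficiency drop between two adjacent layers points at where to tune (controller, network, or Ceph configuration) before blaming the drives.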
Towards deterministic performance
METHODOLOGY: WATERMARKS
1. Identify IOPS/GB at 35% and 70% cluster utilization (with the corresponding number of MySQL instances)
2. Identify MBps/TB at 35% and 70% cluster utilization
3. Determine the target IOPS/GB or MBps/TB at the target cluster utilization
4. (experimental) Set block-device IO throttles to cap consumption by any single client
Towards comparable results
COMMON TOOLS
1. CBT – Ceph Benchmarking Tool
https://github.com/ceph/ceph-brag (email pmcgarry@redhat.com for access)
Ceph performance leaderboard (ceph-brag) coming to ceph.com
INVITATION TO BE PART OF THE ANSWER
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHatNews
THANK YOU
4K Random Write IOPS v. Different Controllers and software configs
RAID CONTROLLER WRITE-BACK (HDD OSDs)
Exploring Wayland: A Modern Display Server for the FutureExploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the Future
ICS
 
Kubernetes_101_Zero_to_Platform_Engineer.pptx
Kubernetes_101_Zero_to_Platform_Engineer.pptxKubernetes_101_Zero_to_Platform_Engineer.pptx
Kubernetes_101_Zero_to_Platform_Engineer.pptx
CloudScouts
 
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage DashboardsAdobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
BradBedford3
 
Mastering Fluent Bit: Ultimate Guide to Integrating Telemetry Pipelines with ...
Mastering Fluent Bit: Ultimate Guide to Integrating Telemetry Pipelines with ...Mastering Fluent Bit: Ultimate Guide to Integrating Telemetry Pipelines with ...
Mastering Fluent Bit: Ultimate Guide to Integrating Telemetry Pipelines with ...
Eric D. Schabell
 
PDF Reader Pro Crack Latest Version FREE Download 2025
PDF Reader Pro Crack Latest Version FREE Download 2025PDF Reader Pro Crack Latest Version FREE Download 2025
PDF Reader Pro Crack Latest Version FREE Download 2025
mu394968
 
Automation Techniques in RPA - UiPath Certificate
Automation Techniques in RPA - UiPath CertificateAutomation Techniques in RPA - UiPath Certificate
Automation Techniques in RPA - UiPath Certificate
VICTOR MAESTRE RAMIREZ
 
How to Optimize Your AWS Environment for Improved Cloud Performance
How to Optimize Your AWS Environment for Improved Cloud PerformanceHow to Optimize Your AWS Environment for Improved Cloud Performance
How to Optimize Your AWS Environment for Improved Cloud Performance
ThousandEyes
 
How Valletta helped healthcare SaaS to transform QA and compliance to grow wi...
How Valletta helped healthcare SaaS to transform QA and compliance to grow wi...How Valletta helped healthcare SaaS to transform QA and compliance to grow wi...
How Valletta helped healthcare SaaS to transform QA and compliance to grow wi...
Egor Kaleynik
 
Adobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest VersionAdobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest Version
kashifyounis067
 
WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)
sh607827
 
Expand your AI adoption with AgentExchange
Expand your AI adoption with AgentExchangeExpand your AI adoption with AgentExchange
Expand your AI adoption with AgentExchange
Fexle Services Pvt. Ltd.
 

Ceph Performance Profiling and Reporting

  • 1. CEPH PERFORMANCE: Profiling and Reporting. Brent Compton, Director, Storage Solution Architectures; Kyle Bader, Sr Storage Architect; Veda Shankar, Sr Storage Architect
  • 2. FAQ FROM THE COMMUNITY: questions that continually surface. How well can Ceph perform? Which of my workloads can it handle? How will Ceph perform on my servers?
  • 3. HOW WELL CAN CEPH PERFORM? Finding the right server and network config for the job: perceived range of Ceph perf vs actual (measured) range of Ceph perf.
  • 4. INVITATION TO BE PART OF THE ANSWER: Ceph performance leaderboard (ceph-brag) coming to ceph.com. https://github.com/ceph/ceph-brag (email [email protected] for access)
  • 5. A LEADERBOARD FOR CEPH PERF RESULTS: posted throughput results.
  • 6. LEADERBOARD ATTRIBUTION AND DETAILS: looking for beta submitters prior to general availability on ceph.com.
  • 7. EMERGING LEADERBOARD FOR IOPS: still under construction.
  • 8. MAPPING CONFIGS TO WORKLOAD IO CATEGORIES. Cluster sizes: OpenStack Starter (64 TB), S (256 TB+), M (1 PB+), L (2 PB+). Node types: MySQL Perf Node (IOPS optimized), Digital Media Perf Node (throughput optimized), Archive Node (cost/capacity optimized).
  • 9. DIGITAL MEDIA PERF NODES: range of MBps measured with Ceph on different server configs. Some pertinent measures: MBps; $/MBps; MBps/provisioned-TB; Watts/MBps; MTTR (self-heal from server failure). [Chart: 4M read and 4M write MBps per drive, HDD sample vs SSD sample]
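The pertinent measures on this slide are simple ratios over a config's raw numbers; a minimal sketch of how they might be derived (all prices, wattages, and throughput figures below are hypothetical placeholders, not measurements from the deck):

```python
def throughput_measures(mbps, cost_usd, provisioned_tb, watts):
    """Derive the slide's per-config measures from raw cluster numbers."""
    return {
        "MBps": mbps,
        "$/MBps": cost_usd / mbps,
        "MBps/provisioned-TB": mbps / provisioned_tb,
        "Watts/MBps": watts / mbps,
    }

# Hypothetical 12-OSD node: 2000 MBps aggregate, $15k, 48 TB raw, 400 W
m = throughput_measures(mbps=2000, cost_usd=15000, provisioned_tb=48, watts=400)
print(m["$/MBps"])               # 7.5
print(m["Watts/MBps"])           # 0.2
```

MTTR is the one measure on the list that cannot be computed this way; it has to be observed by timing recovery after a deliberate server failure.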
  • 10. THROUGHPUT PER OSD DEVICE (READ): sequential read throughput vs IO block size. [Chart: MB/sec per OSD device at block sizes 64, 512, 1024, 4096 for configs: D51PH-1ULH 12xOSD+3xSSD, 2x10G (3xRep); D51PH-1ULH 12xOSD+0xSSD, 2x10G (EC3:2); T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep); T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (EC2:2); T21P-4U/Dual 35xOSD+0xSSD, 10G+10G (EC2:2)]
  • 11. THROUGHPUT PER OSD DEVICE (WRITE): sequential write throughput vs IO block size. [Chart: MB/sec per OSD device at block sizes 64, 512, 1024, 4096 for configs: D51PH-1ULH 12xOSD+3xSSD, 2x10G (3xRep); D51PH-1ULH 12xOSD+3xSSD, 2x10G (EC3:2); T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep); T21P-4U/Dual 35xOSD+0xPCIe, 1x40G (EC2:2); T21P-4U/Dual 35xOSD+0xSSD, 10G+10G (EC2:2)]
  • 12. SERVER SCALABILITY: sequential throughput vs different server sizes. [Chart: MBytes/sec/disk, Rados 4M seq read and write per disk, 12 disks/OSDs (D51PH) vs 35 disks/OSDs (T21P)]
  • 13. DATA PROTECTION METHODS: sequential throughput vs different protection methods (replication vs erasure coding). [Chart: MBytes/sec/disk, Rados 4M seq reads and writes per disk: D51PH-1ULH 12xOSD+0xSSD, 2x10G (EC3:2); D51PH-1ULH 12xOSD+3xSSD, 2x10G (EC3:2); D51PH-1ULH 12xOSD+3xSSD, 2x10G (3xRep)]
  • 14. JOURNALING: sequential IO latency vs different journal approaches. [Chart: latency in msec, Rados 4M seq reads and writes: T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep) vs T21P-4U/Dual 35xOSD+0xPCIe, 1x40G (3xRep)]
  • 15. NETWORK: sequential throughput vs different network bandwidth. [Chart: MBytes/sec/disk, Rados 4M seq reads and writes per disk: T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep) vs T21P-4U/Dual 35xOSD+2xPCIe, 10G+10G (3xRep)]
  • 16. MEDIA TYPE: sequential throughput vs different OSD media types (all-flash vs magnetic).
  • 17. PRICE/PERFORMANCE: different configs vs $/MBps (lowest = best). [Chart: price/perf (write) and price/perf (read) in $/MBps: D51PH-1ULH 12xOSD+3xSSD, 2x10G (3xRep) vs T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep)]
  • 18. PRICE/PERFORMANCE: different configs vs $/MBps (lowest = best). [Chart: price/perf (write) and price/perf (read) in $/MBps: D51PH-1ULH 12xOSD+3xSSD, 2x10G (3xRep) vs T21P-4U/Dual 35xOSD+2xPCIe, 1x40G (3xRep)]
  • 19. MYSQL PERF NODES: range of IOPS measured with Ceph on different server configs. Some pertinent measures: MySQL Sysbench requests/sec; IOPS (4K, 16K random); $/IOP; IOPS/provisioned-GB; Watts/IOP. [Chart: 4K read and 4K write IOPS per drive, HDD sample vs SSD sample]
  • 20. SYSBENCH REQUESTS/SEC: AWS provisioned-IOPS vs Ceph all-flash configs. [Chart: Sysbench read, write, and 70/30 R/W requests/sec: P-IOPS m4.4XL; Ceph cluster cl: 16 vcpu/64MB (1 instance, 14% capacity); Ceph cluster cl: 16 vcpu/64MB (10 instances, 87% capacity)]
  • 21. GETTING DETERMINISTIC IOPS: AWS use of IOPS/GB throttles. [Chart: MySQL IOPS/GB for Sysbench reads and writes: P-IOPS m4.4XL; P-IOPS r3.2XL; GP-SSD r3.2XL]
  • 22. MYSQL INSTANCES AND CLUSTER CAPACITY: Ceph IOPS/GB varying with instance quantity and cluster capacity utilization. [Chart: IOPS/GB values 26, 87, 19 across P-IOPS m4.4XL; Ceph cluster cl: 16 vcpu/64MB (1 instance, 14% capacity); Ceph cluster cl: 16 vcpu/64MB (10 instances, 87% capacity)]
  • 23. METHODOLOGY: BASELINING. Collect baseline measures.
    1. Determine the benchmark measures most representative of business need
    2. Determine the cluster access method (block, object, file)
    3. Collect baseline measures:
       1. Look up manufacturer drive specifications (IOPS, MBps, latency)
       2. Single-node IO baseline (max IOPS and MBps to all drives concurrently)
       3. Network baseline (consistent bandwidth across the full route mesh)
       4. Rados baseline (max sequential throughput per drive)
       5. RBD baseline (max IOPS per drive)
       6. Sysbench baseline (max DB requests/sec per drive)
       7. RGW baseline (max object OP/sec per drive)
    4. Calculate drive efficiency at each level up the stack
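Step 4 of the baselining methodology (drive efficiency at each level up the stack) reduces to dividing each layer's measured per-drive number by the layer beneath it. A minimal sketch; the layer names and per-drive MBps figures below are made-up illustrations, not measurements from the deck:

```python
def stack_efficiency(baselines):
    """Given per-drive throughput at each layer of the stack (ordered
    bottom-up), return each layer's efficiency relative to the layer
    beneath it, exposing where throughput is lost."""
    layers = list(baselines.items())
    return {
        name: value / prev_value
        for (prev_name, prev_value), (name, value) in zip(layers, layers[1:])
    }

# Hypothetical per-drive sequential-read MBps at each baseline layer
baselines = {
    "drive_spec": 180.0,      # manufacturer specification
    "single_node_io": 160.0,  # raw IO to all drives concurrently
    "rados": 90.0,            # rados bench, per drive
}
print(stack_efficiency(baselines))
```

A layer whose ratio drops sharply below the others is the place to investigate first (e.g. a low rados/single-node ratio points at Ceph or network configuration rather than the drives).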
  • 24. METHODOLOGY: WATERMARKS. Towards deterministic performance.
    1. Identify IOPS/GB at 35% and 70% cluster utilization (with corresponding MySQL instances)
    2. Identify MBps/TB at 35% and 70% cluster utilization
    3. Determine target IOPS/GB or MBps at target cluster utilization
    4. (experimental) Set block device IO throttles to cap consumption by any single client
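Steps 1 through 3 of the watermark methodology can be sketched as a linear interpolation between the two measured utilization points. The watermark values below are hypothetical placeholders chosen for illustration, not the deck's measurements:

```python
def interpolate_watermark(util, low_point, high_point):
    """Linearly estimate IOPS/GB (or MBps/TB) at a target cluster
    utilization, given two measured (utilization, value) watermarks."""
    (u1, v1), (u2, v2) = low_point, high_point
    return v1 + (v2 - v1) * (util - u1) / (u2 - u1)

# Hypothetical watermarks: 26 IOPS/GB at 35% full, 19 IOPS/GB at 70% full
target = interpolate_watermark(0.50, (0.35, 26.0), (0.70, 19.0))
print(round(target, 1))  # ≈ 23.0 IOPS/GB at a 50% utilization target
```

The resulting target is what step 4's per-client block device throttle would be set against, so one noisy client cannot consume the whole cluster's IOPS budget.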
  • 25. Towards comparable results INSERT DESIGNATOR, IF NEEDED25 COMMON TOOLS 1. CBT – Ceph Benchmarking Tool
  • 26. https://github.com/ceph/ceph-brag (email [email protected] for access) INSERT DESIGNATOR, IF NEEDED26 Ceph performance leaderboard (ceph-brag) coming to ceph.com INVITATION TO BE PART OF THE ANSWER
  • 28. 4K Random Write IOPS v. Different Controllers and software configs RAID CONTROLLER WRITE-BACK (HDD OSDS) 28