Ceph All-Flash Array Design
Based on NUMA Architecture
QCT (QuantaCloud Technology)
Marco Huang, Technical Manager
Becky Lin, Program Manager
• All-flash Ceph and Use Cases
• QCT QxStor All-flash Ceph for IOPS
• QCT Lab Environment Overview & Detailed Architecture
• Importance of NUMA and Proof Points
Agenda
QCT Powers Most of Cloud Services
Global Tier 1 Hyperscale Datacenters, Telcos and Enterprises
• QCT (Quanta Cloud Technology) is a subsidiary of Quanta Computer
• Quanta Computer is a Fortune Global 500 company with over $32B in revenue
Why All-flash Storage?
• Falling flash prices: flash prices fell as much as 75% over the 18 months leading up to mid-2016, and the trend continues. (“TechRepublic: 10 storage trends to watch in 2016”)
• Flash is 10x cheaper than DRAM: with persistence and high capacity (“NetApp”)
• Flash is 100x cheaper than disk: pennies per IOPS vs. dollars per IOPS (“NetApp”)
• Flash is 1000x faster than disk: latency drops from milliseconds to microseconds (“NetApp”)
• Flash performance advantage: HDDs have an advantage in $/GB, while flash has an advantage in $/IOPS. (“TechTarget: Hybrid storage arrays vs. all-flash arrays: A little flash or a lot?”)
• NVMe-based storage trend: 60% of enterprise storage appliances will have NVMe bays by 2020 (“G2M Research”)
Require sub-millisecond latency
Need performance-optimized storage for mission-critical apps
Flash capacity gains while the price drops
All-flash Ceph Use Cases
QCT QxStor Red Hat Ceph Storage Edition
Optimized for workloads

Throughput Optimized – QxStor RCT-200 (D51PH-1ULH)
• Densest 1U Ceph building block
• Smaller failure domain
• 3x SSD S3710 journal
• 12x HDD 7.2k rpm

Throughput Optimized – QxStor RCT-400 (T21P-4U)
• Obtain best throughput & density at once
• Scale out to 700TB
• 2x 2x NVMe P3700 journal
• 2x 35x HDD

USE CASE: Block or Object Storage, Video, Audio, Image, Streaming media, Big Data
• 3x replication

Cost/Capacity Optimized – QxStor RCC-400 (T21P-4U)
• Maximize storage capacity
• Highest density: 560TB* raw capacity per chassis
• 2x 35x HDD

USE CASE: Object storage, Archive, Backup, Enterprise Dropbox
• Erasure coding
* Optional model with one MB per chassis can support 620TB raw capacity

IOPS Optimized – QxStor RCI-300 (D51BP-1U)
• All Flash Design
• Lowest latency
• 4x P3520 2TB or 4x P3700 1.6TB

USE CASE: Database, HPC, Mission Critical Applications
• 2x replication
QCT QxStor RCI-300
All Flash Design Ceph for I/O Intensive Workloads

SKU1: All-flash Ceph – the Best IOPS SKU
• Ceph Storage Server: D51BP-1U
• CPU: 2x E5-2995 v4 or higher
• RAM: 128GB
• NVMe SSD: 4x P3700 1.6TB
• NIC: 10GbE dual port or 40GbE dual port

SKU2: All-flash Ceph – IOPS/Capacity Balanced SKU (best TCO, as of today)
• Ceph Storage Server: D51BP-1U
• CPU: 2x E5-2980 v4 or higher core count
• RAM: 128GB
• NVMe SSD: 4x P3520 2TB
• NIC: 10GbE dual port or 40GbE dual port
NUMA Balanced Ceph Hardware
Highest IOPS & Lowest Latency
Optimized Ceph & HW Integration for
IOPS intensive workloads
NVMe: Best-in-Class IOPS, Lower/Consistent Latency
Lowest Latency of Standard Storage Interfaces

[Chart: IOPS – 4K random workloads (100% Read, 70% Read, 0% Read), PCIe/NVMe vs. SAS 12Gb/s]
• 3x better IOPS vs. SAS 12Gb/s; for the same number of CPU cycles, NVMe delivers over 2x the IOPS of SAS
• Gen1 NVMe has 2 to 3x better latency consistency vs. SAS

Test and system configurations: PCI Express* (PCIe*)/NVM Express* (NVMe) measurements made on an Intel® Core™ i7-3770S system @ 3.1GHz with 4GB memory running Windows* Server 2012 Standard OS, Intel PCIe/NVMe SSDs, data collected with the IOmeter* tool. SAS measurements from HGST Ultrastar* SSD800M/1000M (SAS), SATA S3700 Series. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Source: Intel Internal Testing.
QCT Lab Environment Overview

[Diagram: 10 client systems (Client 1–10) connect over a 10GbE public network to a 5-node Ceph storage cluster (Ceph 1–5) plus monitors; clients access the cluster through the RBD (block storage) interface on top of LIBRADOS/RADOS; a separate 10GbE cluster network connects the storage nodes.]
Detailed System Architecture in QCT Lab

5-node all-NVMe Ceph cluster (QuantaGrid D51BP-1U)
• Dual Xeon E5-2699 v4 @ 2.3GHz, 88 HT, 128GB DDR4
• RHEL 7.3 (kernel 3.10), Red Hat Ceph Storage 2.1
• 4x NVMe per node (NVMe1–NVMe4), Ceph OSD1–OSD16 per node
• 20x 2TB P3520 SSDs, 80 OSDs total
• 2x replication, 19TB effective capacity
• Tests at 82% cluster fill level
• Public network 10GbE, cluster network 10GbE

10x client systems
• Dual Xeon E5-2699 v4 @ 2.3GHz, 88 HT, 128GB DDR4
• Ceph RBD client: Docker1 and Docker2 (krbd) run Percona DB Server; Docker3 and Docker4 run Sysbench clients
• Sysbench containers: 16 vCPUs, 32GB RAM, FIO 2.8, Sysbench 0.5
• DB containers: 16 vCPUs, 32GB RAM, 200GB RBD volume, 100GB MySQL dataset, InnoDB buffer cache 25GB (25%)
Stage | Test Subject | Benchmark Tools | Major Task
I/O Baseline | Raw Disk | FIO | Determine maximum server IO backplane bandwidth
Network Baseline | NIC | iPerf | Ensure consistent network bandwidth between all nodes
Bare Metal RBD Baseline | LibRBD | FIO CBT | Use the FIO RBD engine to test performance using libRBD (example job file below)
Docker Container OLTP Baseline | Percona DB + Sysbench | Sysbench/OLTP | Establish number of workload-driver VMs desired per client

Benchmark criteria:
1. Default: ceph.conf
2. Software Level Tuning: ceph.conf tuned
3. Software + NUMA CPU Pinning: ceph.conf tuned + NUMA CPU pinning
Benchmark Methodology
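For reference, a minimal FIO job file for the Bare Metal RBD baseline stage could look like the sketch below. It is an illustration only: the pool and image names are assumptions, not values from the deck, and it assumes a pre-created RBD image and a readable client.admin keyring. The run length and ramp match the CBT settings in the appendix.

# librbd-baseline.fio -- hypothetical FIO job file for the LibRBD baseline stage
# Pool and image names below are illustrative.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
rw=randread
bs=4k
direct=1
time_based=1
runtime=300
ramp_time=100

# one job per queue depth; repeat for iodepth 4/8/16/32/64/128 as CBT does
[qd16]
iodepth=16

In the actual tests these runs were driven by CBT rather than hand-written job files; the CBT YAML is reproduced in the appendix.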
• Use faster media for journals, metadata
• Use recent Linux kernels
– blk-mq support packs big performance gains with NVMe media
– optimizations for non-rotational media
• Use tuned where available
– adaptive latency performance tuning [2]
• Virtual memory, network and storage tweaks
– use commonly recommended VM, network settings [1-4]
– enable rq_affinity, read ahead for NVMe devices
• BIOS and CPU performance governor settings
– disable C-states and enable Turbo-boost
– use “performance” CPU governor
Configuring All-flash Ceph
System Tuning for Low-latency Workloads
[1] https://wiki.mikejung.biz/Ubuntu_Performance_Tuning
[2] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/tuned-adm.html
[3] http://www.brendangregg.com/blog/2015-03-03/performance-tuning-linux-instances-on-ec2.html
[4] https://www.suse.com/documentation/ses-4/singlehtml/book_storage_admin/book_storage_admin.html
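A minimal sketch of how these host-level settings could be applied on RHEL 7. The device glob, CPU governor tool, and sysctl values below are illustrative assumptions rather than the exact values used in the lab; C-state and Turbo Boost settings are changed in BIOS setup, not from the OS.

# apply-low-latency-tuning.sh -- illustrative only
tuned-adm profile latency-performance        # adaptive latency tuning profile [2]
cpupower frequency-set -g performance        # "performance" CPU governor
for dev in /sys/block/nvme*n1; do
    echo 2   > $dev/queue/rq_affinity         # complete I/O near the submitting CPU
    echo 128 > $dev/queue/read_ahead_kb       # enable read-ahead on NVMe devices
done
# commonly recommended VM and network tweaks [1-4]; exact values are assumptions
sysctl -w vm.dirty_ratio=10 vm.dirty_background_ratio=5
sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216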
Parameter | Default value | Tuned value | Notes
objecter_inflight_ops | 1024 | 102400 | The objecter is responsible for sending requests to OSDs
objecter_inflight_op_bytes | 104857600 | 1048576000 | objecter_inflight_ops / objecter_inflight_op_bytes tell the objecter to throttle outgoing ops according to budget (values based on experiments in the Dumpling timeframe)
ms_dispatch_throttle_bytes | 104857600 | 1048576000 | Throttles the dispatch message size for the simple messenger (values based on experiments in the Dumpling timeframe)
filestore_queue_max_ops | 50 | 5000 | filestore_queue_max_ops / filestore_queue_max_bytes are used to throttle inflight ops for the filestore
filestore_queue_max_bytes | 104857600 | 1048576000 | These throttles are checked before sending ops to the journal, so if the filestore does not get enough budget for the current op, the OSD op thread will be blocked
Configuring All-flash Ceph
Ceph Tunables
Parameter | Default value | Tuned value | Notes
filestore_max_sync_interval | 5 | 10 | Controls the interval (in seconds) at which the sync thread flushes data from memory to disk. By default the filestore writes data to memory (page cache) and the sync thread is responsible for flushing it to disk, after which journal entries can be trimmed. Note that a large filestore_max_sync_interval can cause performance spikes
filestore_op_threads | 2 | 6 | Controls the number of filesystem operation threads that execute in parallel. If the storage backend is fast enough and has enough queues to support parallel operations, it is recommended to increase this parameter, given there is enough CPU available
osd_op_threads | 2 | 32 | Controls the number of threads servicing Ceph OSD daemon operations. Setting this to 0 disables multi-threading. Increasing this number may increase the request processing rate. If the storage backend is fast enough and has enough queues to support parallel operations, it is recommended to increase this parameter, given there is enough CPU available
Configuring All-flash Ceph
Ceph Tunables
Parameter | Default value | Tuned value | Notes
journal_queue_max_ops | 300 | 3000 | journal_queue_max_ops / journal_queue_max_bytes throttle inflight ops for the journal. If the journal does not get enough budget for the current op, it will block the OSD op thread
journal_queue_max_bytes | 33554432 | 1048576000 | See above
journal_max_write_entries | 100 | 1000 | journal_max_write_entries / journal_max_write_bytes throttle the ops or bytes for every journal write. Tweaking these two parameters may be helpful for small writes
journal_max_write_bytes | 10485760 | 1048576000 | See above
Configuring All-flash Ceph
Ceph Tunables
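Taken together, the tuned values from the three tables above correspond to a ceph.conf fragment along the lines of the sketch below. This is only a consolidation of the table values; the complete ceph.conf actually used in the lab is reproduced in the appendix and departs from a few of these numbers (for example, larger journal_queue_max_bytes and filestore_op_threads).

[osd]
# throttles raised for all-flash operation (tuned values from the tables above)
objecter_inflight_ops = 102400
objecter_inflight_op_bytes = 1048576000
ms_dispatch_throttle_bytes = 1048576000
filestore_queue_max_ops = 5000
filestore_queue_max_bytes = 1048576000
filestore_max_sync_interval = 10
filestore_op_threads = 6
osd_op_threads = 32
journal_queue_max_ops = 3000
journal_queue_max_bytes = 1048576000
journal_max_write_entries = 1000
journal_max_write_bytes = 1048576000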
• Leverage latest Intel NVMe technology to reach high performance and bigger capacity with lower $/GB
– Intel DC P3520 2TB raw performance: 375K read IOPS, 26K write IOPS
• By using multiple OSD partitions, Ceph performance scales linearly
– Reduces lock contention within a single OSD process
– Lower latency at all queue depths, with the biggest impact on random reads
• Introduces the concept of multiple OSDs on the same physical device
– Conceptually similar CRUSH map data placement rules to managing disks in an enclosure
Multi-partitioned NVMe SSDs
[Diagram: a single NVMe SSD divided into four partitions, each hosting its own OSD (OSD1–OSD4)]
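As an illustration of the multi-partition layout, a Jewel-era filestore deployment could carve each NVMe device into four equal data partitions roughly as sketched below. The device name, partition sizes, colocated journals, and the parted/ceph-disk workflow are assumptions for the sketch; the lab cluster itself was deployed with ceph-ansible, as the appendix ceph.conf header indicates.

# partition-nvme.sh -- hypothetical sketch: 4 OSDs per NVMe device
DEV=/dev/nvme0n1
parted -s $DEV mklabel gpt
# four equal data partitions
parted -s $DEV mkpart osd1 0% 25%
parted -s $DEV mkpart osd2 25% 50%
parted -s $DEV mkpart osd3 50% 75%
parted -s $DEV mkpart osd4 75% 100%
# prepare one filestore OSD per partition (journal colocated here for simplicity)
for part in ${DEV}p1 ${DEV}p2 ${DEV}p3 ${DEV}p4; do
    ceph-disk prepare $part
done

With four OSDs per device, the CRUSH map should still treat the physical NVMe drive as the failure domain, much as multiple disks in one enclosure are grouped under a single host or chassis bucket.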
Multi-partitioned NVMe SSDs

[Chart: Multiple OSDs per device comparison — 4K random read, average latency (ms) vs. IOPS, 5 nodes with 20/40/80 OSDs, comparing 1, 2 and 4 OSDs per NVMe]
[Chart: Single-node CPU utilization comparison — 4K random read @ QD32, 4/8/16 OSDs, %CPU utilization for 1, 2 and 4 OSDs per NVMe]
These measurements were done on a Ceph node based on Intel P3700 NVMe SSDs but are equally applicable to other NVMe SSDs.
Performance Testing Results
4K 100% Random Read

[Chart: 4K random read, latency vs. IOPS, IO depth scaling 4–128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph Storage 2.1 — Default vs. Tuned]
• ~1.34M IOPS @ ~1ms at QD=16; ~1.57M IOPS @ ~4ms
• ~200% improvement in IOPS and latency with tuning
Performance Testing Results
4K 100% Random Write, 70/30 OLTP Mix

[Chart: latency vs. IOPS (100% write and 70/30 rd/wr), IO depth scaling 4–128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph 2.1]
• ~450K 70/30 OLTP IOPS @ ~1ms, QD=4
• ~165K write IOPS @ ~2ms, QD=4
• NUMA-balance network and storage devices across CPU sockets
• Bind IO devices to local CPU socket (IRQ pinning)
• Align OSD data and Journals to the same NUMA node
• Pin OSD processes to local CPU socket (NUMA node pinning)
NUMA Considerations
[Diagram: dual-socket server — NUMA Node 0 (Socket 0 cores plus local memory) and NUMA Node 1 (Socket 1 cores plus local memory) connected by QPI, each node with its own local storage and NICs; Ceph OSD processes are pinned to the socket local to their devices so that I/O stays local instead of crossing QPI to the remote node]
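A rough sketch of how the IRQ and process pinning above could be done on RHEL 7. The CPU lists, OSD-to-socket mapping, and device names are illustrative assumptions and must match the real topology (for example, OSD 1–8 on socket 0 and OSD 9–16 on socket 1, as on the D51BP-1U shown next).

# numa-pin.sh -- illustrative sketch of the IRQ and OSD pinning described above
# Assumes socket 0 = logical CPUs 0-21,44-65 and socket 1 = 22-43,66-87 on the
# 88-thread E5-2699 v4 nodes; verify with `lscpu` or `numactl --hardware`.

# 1) IRQ pinning: steer nvme0's interrupt lines to its local socket (socket 0 here)
for irq in $(grep nvme0 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
    echo 0-21,44-65 > /proc/irq/$irq/smp_affinity_list
done

# 2) Process pinning: run each OSD on the socket local to its NVMe device.
# Direct invocation shown only for illustration; in practice the ceph-osd@.service
# unit manages the daemon, and a drop-in with "CPUAffinity=0-21 44-65" achieves
# the same pinning persistently.
numactl --cpunodebind=0 --membind=0 ceph-osd -i 1 --setuser ceph --setgroup ceph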
NUMA-Balanced Config on QCT QuantaGrid D51BP-1U

[Diagram: QCT QuantaGrid D51BP-1U — CPU 0 and CPU 1, each with local RAM, linked by QPI; each CPU connects to 4 NVMe drive slots over PCIe Gen3 x4 links and to 1 NIC slot over PCIe Gen3 x8; Ceph OSD 1–8 run on CPU 0 and Ceph OSD 9–16 on CPU 1]
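Before pinning, the NVMe and NIC locality can be read straight from sysfs; a quick check along these lines (the NIC interface name is an example) confirms which socket each slot hangs off:

# which NUMA node is each NVMe device and the NIC attached to?
for d in /sys/class/nvme/nvme*; do
    echo "$d -> node $(cat $d/device/numa_node)"
done
cat /sys/class/net/ens1f0/device/numa_node   # example NIC interface name
numactl --hardware                           # CPUs and memory per node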
Performance Testing Results
Latency improvements after NUMA optimizations

[Chart: 70/30 4K OLTP performance, before vs. after NUMA balance — IO depth scaling 4–128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph 2.1; SW Tuned vs. SW + NUMA CPU Pinning]
• 40% better IOPS and 100% better latency at QD=8 with NUMA balance
• At QD=8: 100% better average latency, 15–20% better 90th percentile latency, 10–15% better 99th percentile latency
• All-NVMe Ceph enables high performance workloads
• NUMA-balanced architecture
• Small footprint (1U), lower overall TCO
• Million IOPS with very low latency
Visit www.QCT.io for QxStor Red Hat Ceph Storage Edition:
• Reference Architecture Red Hat Ceph Storage on QCT Servers
• Datasheet QxStor Red Hat Ceph Storage
• Solution Brief QCT and Intel Hadoop Over Ceph Architecture
• Solution Brief Deploying Red Hat Ceph Storage on QCT servers
• Solution Brief Containerized Ceph for On-Demand, Hyperscale Storage
For Other Information…
Appendix
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
fsid = 7e191449-3592-4ec3-b42b-e2c4d01c0104
max open files = 131072
crushtool = /usr/bin/crushtool
debug_lockdep = 0/1
debug_context = 0/1
debug_crush = 1/1
debug_buffer = 0/1
debug_timer = 0/0
debug_filer = 0/1
debug_objecter = 0/1
debug_rados = 0/5
debug_rbd = 0/5
debug_ms = 0/5
debug_monc = 0/5
debug_tp = 0/5
debug_auth = 1/5
debug_finisher = 1/5
debug_heartbeatmap = 1/5
debug_perfcounter = 1/5
debug_rgw = 1/5
debug_asok = 1/5
debug_throttle = 1/1
debug_journaler = 0/0
debug_objectcatcher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_mon = 0/0
debug_paxos = 0/0
osd_crush_chooseleaf_type = 0
filestore_xattr_use_omap = true
osd_pool_default_size = 1
osd_pool_default_min_size = 1
Configuration Detail – ceph.conf (1/2)
rbd_cache = true
mon_compact_on_trim = false
log_to_syslog = false
log_file = /var/log/ceph/$name.log
mutex_perf_counter = true
throttler_perf_counter = false
ms_nocrc = true
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
rbd_cache = true
rbd_cache_writethrough_until_flush = false
[mon]
[mon.qct50]
host = qct50
# we need to check if monitor_interface is defined in the inventory per host or if it's set in a group_vars file
mon addr = 10.5.15.50
mon_max_pool_pg_num = 166496
mon_osd_max_split_count = 10000
[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k,delaylog"
osd journal size = 10240
cluster_network = 10.5.16.0/24
public_network = 10.5.15.0/24
filestore_queue_max_ops = 5000
osd_client_message_size_cap = 0
objecter_inflight_op_bytes = 1048576000
ms_dispatch_throttle_bytes = 1048576000
filestore_wbthrottle_enable = True
filestore_fd_cache_shards = 64
objecter_inflight_ops = 1024000
filestore_max_sync_interval = 10
filestore_op_threads = 16
osd_pg_object_context_cache_count = 10240
journal_queue_max_ops = 3000
filestore_odsync_write = True
journal_queue_max_bytes = 10485760000
journal_max_write_entries = 1000
filestore_queue_committing_max_ops = 5000
journal_max_write_bytes = 1048576000
filestore_fd_cache_size = 10240
osd_client_message_cap = 0
journal_dynamic_throttle = True
osd_enable_op_tracker = False
Configuration Detail – ceph.conf (2/2)
cluster:
  head: "root@qct50"
  clients: ["root@qct50", "root@qct51", "root@qct52", "root@qct53", "root@qct54", "root@qct55", "root@qct56", "root@qct57", "root@qct58", "root@qct59"]
  osds: ["root@qct62", "root@qct63", "root@qct64", "root@qct65", "root@qct66"]
  mons: ["root@qct50"]
  osds_per_node: 16
  fs: xfs
  mkfs_opts: -f -i size=2048 -n size=64k
  mount_opts: -o inode64,noatime,logbsize=256k
  conf_file: /etc/ceph/ceph.conf
  ceph.conf: /etc/ceph/ceph.conf
  iterations: 1
  rebuild_every_test: False
  tmp_dir: "/tmp/cbt"
  clusterid: 7e191449-3592-4ec3-b42b-e2c4d01c0104
  use_existing: True
  pool_profiles:
    replicated:
      pg_size: 8192
      pgp_size: 8192
      replication: 2
benchmarks:
  librbdfio:
    rbdadd_mons: "root@qct50:6789"
    rbdadd_options: "noshare"
    time: 300
    ramp: 100
    vol_size: 8192
    mode: ['randread']
    numjobs: 1
    use_existing_volumes: False
    procs_per_volume: [1]
    volumes_per_client: [10]
    op_size: [4096]
    concurrent_procs: [1]
    iodepth: [4, 8, 16, 32, 64, 128]
    osd_ra: [128]
    norandommap: True
    cmd_path: '/root/cbt_packages/fio/fio'
    log_avg_msec: 250
    pool_profile: 'replicated'
Configuration Detail - CBT YAML File
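For completeness, CBT consumes a YAML file like the one above and archives per-iteration results; invoking it looks roughly like the line below (the YAML filename and archive directory are illustrative):

# run the librbdfio benchmark defined in the YAML above
./cbt.py --archive=/tmp/cbt-results --conf=/etc/ceph/ceph.conf ceph-allflash.yaml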
www.QCT.io
Looking for
innovative cloud solution?
Come to QCT, who else?
Ad

More Related Content

What's hot (20)

Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Odinot Stanislas
 
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
OpenStack Korea Community
 
A crash course in CRUSH
A crash course in CRUSHA crash course in CRUSH
A crash course in CRUSH
Sage Weil
 
2019.06.27 Intro to Ceph
2019.06.27 Intro to Ceph2019.06.27 Intro to Ceph
2019.06.27 Intro to Ceph
Ceph Community
 
Disaggregating Ceph using NVMeoF
Disaggregating Ceph using NVMeoFDisaggregating Ceph using NVMeoF
Disaggregating Ceph using NVMeoF
ShapeBlue
 
Build an High-Performance and High-Durable Block Storage Service Based on Ceph
Build an High-Performance and High-Durable Block Storage Service Based on CephBuild an High-Performance and High-Durable Block Storage Service Based on Ceph
Build an High-Performance and High-Durable Block Storage Service Based on Ceph
Rongze Zhu
 
Ceph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion ObjectsCeph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion Objects
Karan Singh
 
Your 1st Ceph cluster
Your 1st Ceph clusterYour 1st Ceph cluster
Your 1st Ceph cluster
Mirantis
 
Ceph and Openstack in a Nutshell
Ceph and Openstack in a NutshellCeph and Openstack in a Nutshell
Ceph and Openstack in a Nutshell
Karan Singh
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage system
Italo Santos
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Sean Cohen
 
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOceanCeph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Community
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices:  A Deep DiveCeph Block Devices:  A Deep Dive
Ceph Block Devices: A Deep Dive
Red_Hat_Storage
 
Ceph Introduction 2017
Ceph Introduction 2017  Ceph Introduction 2017
Ceph Introduction 2017
Karan Singh
 
Ceph c01
Ceph c01Ceph c01
Ceph c01
Lâm Đào
 
Ceph as software define storage
Ceph as software define storageCeph as software define storage
Ceph as software define storage
Mahmoud Shiri Varamini
 
Seastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for CephSeastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for Ceph
ScyllaDB
 
Seastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for CephSeastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for Ceph
ScyllaDB
 
2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard
Ceph Community
 
Ceph RBD Update - June 2021
Ceph RBD Update - June 2021Ceph RBD Update - June 2021
Ceph RBD Update - June 2021
Ceph Community
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Odinot Stanislas
 
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
[OpenInfra Days Korea 2018] Day 2 - CEPH 운영자를 위한 Object Storage Performance T...
OpenStack Korea Community
 
A crash course in CRUSH
A crash course in CRUSHA crash course in CRUSH
A crash course in CRUSH
Sage Weil
 
2019.06.27 Intro to Ceph
2019.06.27 Intro to Ceph2019.06.27 Intro to Ceph
2019.06.27 Intro to Ceph
Ceph Community
 
Disaggregating Ceph using NVMeoF
Disaggregating Ceph using NVMeoFDisaggregating Ceph using NVMeoF
Disaggregating Ceph using NVMeoF
ShapeBlue
 
Build an High-Performance and High-Durable Block Storage Service Based on Ceph
Build an High-Performance and High-Durable Block Storage Service Based on CephBuild an High-Performance and High-Durable Block Storage Service Based on Ceph
Build an High-Performance and High-Durable Block Storage Service Based on Ceph
Rongze Zhu
 
Ceph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion ObjectsCeph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion Objects
Karan Singh
 
Your 1st Ceph cluster
Your 1st Ceph clusterYour 1st Ceph cluster
Your 1st Ceph cluster
Mirantis
 
Ceph and Openstack in a Nutshell
Ceph and Openstack in a NutshellCeph and Openstack in a Nutshell
Ceph and Openstack in a Nutshell
Karan Singh
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage system
Italo Santos
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Sean Cohen
 
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOceanCeph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Community
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices:  A Deep DiveCeph Block Devices:  A Deep Dive
Ceph Block Devices: A Deep Dive
Red_Hat_Storage
 
Ceph Introduction 2017
Ceph Introduction 2017  Ceph Introduction 2017
Ceph Introduction 2017
Karan Singh
 
Seastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for CephSeastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for Ceph
ScyllaDB
 
Seastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for CephSeastore: Next Generation Backing Store for Ceph
Seastore: Next Generation Backing Store for Ceph
ScyllaDB
 
2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard
Ceph Community
 
Ceph RBD Update - June 2021
Ceph RBD Update - June 2021Ceph RBD Update - June 2021
Ceph RBD Update - June 2021
Ceph Community
 

Similar to Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (20)

Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDSAccelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Ceph Community
 
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance BarriersCeph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Community
 
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community
 
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Community
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Patrick McGarry
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Ceph Community
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red_Hat_Storage
 
CETH for XDP [Linux Meetup Santa Clara | July 2016]
CETH for XDP [Linux Meetup Santa Clara | July 2016] CETH for XDP [Linux Meetup Santa Clara | July 2016]
CETH for XDP [Linux Meetup Santa Clara | July 2016]
IO Visor Project
 
Azure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Azure VM 101 - HomeGen by CloudGen Verona - Marco ObinuAzure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Azure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Marco Obinu
 
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems SpecialistOWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
Paris Open Source Summit
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
Red_Hat_Storage
 
Introduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AIIntroduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AI
Tyrone Systems
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver Cluster
Aaron Joue
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red_Hat_Storage
 
Datasheet - Pivot3 - HCI Family
Datasheet - Pivot3 - HCI FamilyDatasheet - Pivot3 - HCI Family
Datasheet - Pivot3 - HCI Family
Grant Aitken
 
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red_Hat_Storage
 
April 2014 IBM announcement webcast
April 2014 IBM announcement webcastApril 2014 IBM announcement webcast
April 2014 IBM announcement webcast
HELP400
 
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Community
 
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Danielle Womboldt
 
3.INTEL.Optane_on_ceph_v2.pdf
3.INTEL.Optane_on_ceph_v2.pdf3.INTEL.Optane_on_ceph_v2.pdf
3.INTEL.Optane_on_ceph_v2.pdf
hellobank1
 
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDSAccelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Ceph Community
 
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance BarriersCeph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Community
 
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community
 
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Community
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Patrick McGarry
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Ceph Community
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red_Hat_Storage
 
CETH for XDP [Linux Meetup Santa Clara | July 2016]
CETH for XDP [Linux Meetup Santa Clara | July 2016] CETH for XDP [Linux Meetup Santa Clara | July 2016]
CETH for XDP [Linux Meetup Santa Clara | July 2016]
IO Visor Project
 
Azure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Azure VM 101 - HomeGen by CloudGen Verona - Marco ObinuAzure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Azure VM 101 - HomeGen by CloudGen Verona - Marco Obinu
Marco Obinu
 
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems SpecialistOWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
OWF14 - Plenary Session : Thibaud Besson, IBM POWER Systems Specialist
Paris Open Source Summit
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
Red_Hat_Storage
 
Introduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AIIntroduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AI
Tyrone Systems
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver Cluster
Aaron Joue
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red_Hat_Storage
 
Datasheet - Pivot3 - HCI Family
Datasheet - Pivot3 - HCI FamilyDatasheet - Pivot3 - HCI Family
Datasheet - Pivot3 - HCI Family
Grant Aitken
 
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red_Hat_Storage
 
April 2014 IBM announcement webcast
April 2014 IBM announcement webcastApril 2014 IBM announcement webcast
April 2014 IBM announcement webcast
HELP400
 
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph performance by leveraging Intel Optane and...
Ceph Community
 
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Danielle Womboldt
 
3.INTEL.Optane_on_ceph_v2.pdf
3.INTEL.Optane_on_ceph_v2.pdf3.INTEL.Optane_on_ceph_v2.pdf
3.INTEL.Optane_on_ceph_v2.pdf
hellobank1
 
Ad

More from Danielle Womboldt (9)

Ceph Day Beijing- Ceph Community Update
Ceph Day Beijing- Ceph Community UpdateCeph Day Beijing- Ceph Community Update
Ceph Day Beijing- Ceph Community Update
Danielle Womboldt
 
Ceph Day Beijing - Storage Modernization with Intel and Ceph
Ceph Day Beijing - Storage Modernization with Intel and CephCeph Day Beijing - Storage Modernization with Intel and Ceph
Ceph Day Beijing - Storage Modernization with Intel and Ceph
Danielle Womboldt
 
Ceph Day Beijing - Welcome to Beijing Ceph Day
Ceph Day Beijing - Welcome to Beijing Ceph DayCeph Day Beijing - Welcome to Beijing Ceph Day
Ceph Day Beijing - Welcome to Beijing Ceph Day
Danielle Womboldt
 
Ceph Day Beijing - Leverage Ceph for SDS in China Mobile
Ceph Day Beijing - Leverage Ceph for SDS in China MobileCeph Day Beijing - Leverage Ceph for SDS in China Mobile
Ceph Day Beijing - Leverage Ceph for SDS in China Mobile
Danielle Womboldt
 
Ceph Day Beijing - BlueStore and Optimizations
Ceph Day Beijing - BlueStore and OptimizationsCeph Day Beijing - BlueStore and Optimizations
Ceph Day Beijing - BlueStore and Optimizations
Danielle Womboldt
 
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Danielle Womboldt
 
Ceph Day Beijing - Small Files & All Flash: Inspur's works on Ceph
Ceph Day Beijing - Small Files & All Flash:  Inspur's works on Ceph Ceph Day Beijing - Small Files & All Flash:  Inspur's works on Ceph
Ceph Day Beijing - Small Files & All Flash: Inspur's works on Ceph
Danielle Womboldt
 
Ceph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for CephCeph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for Ceph
Danielle Womboldt
 
Ceph Day Beijing - Ceph RDMA Update
Ceph Day Beijing - Ceph RDMA UpdateCeph Day Beijing - Ceph RDMA Update
Ceph Day Beijing - Ceph RDMA Update
Danielle Womboldt
 
Ceph Day Beijing- Ceph Community Update
Ceph Day Beijing- Ceph Community UpdateCeph Day Beijing- Ceph Community Update
Ceph Day Beijing- Ceph Community Update
Danielle Womboldt
 
Ceph Day Beijing - Storage Modernization with Intel and Ceph
Ceph Day Beijing - Storage Modernization with Intel and CephCeph Day Beijing - Storage Modernization with Intel and Ceph
Ceph Day Beijing - Storage Modernization with Intel and Ceph
Danielle Womboldt
 
Ceph Day Beijing - Welcome to Beijing Ceph Day
Ceph Day Beijing - Welcome to Beijing Ceph DayCeph Day Beijing - Welcome to Beijing Ceph Day
Ceph Day Beijing - Welcome to Beijing Ceph Day
Danielle Womboldt
 
Ceph Day Beijing - Leverage Ceph for SDS in China Mobile
Ceph Day Beijing - Leverage Ceph for SDS in China MobileCeph Day Beijing - Leverage Ceph for SDS in China Mobile
Ceph Day Beijing - Leverage Ceph for SDS in China Mobile
Danielle Womboldt
 
Ceph Day Beijing - BlueStore and Optimizations
Ceph Day Beijing - BlueStore and OptimizationsCeph Day Beijing - BlueStore and Optimizations
Ceph Day Beijing - BlueStore and Optimizations
Danielle Womboldt
 
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a...
Danielle Womboldt
 
Ceph Day Beijing - Small Files & All Flash: Inspur's works on Ceph
Ceph Day Beijing - Small Files & All Flash:  Inspur's works on Ceph Ceph Day Beijing - Small Files & All Flash:  Inspur's works on Ceph
Ceph Day Beijing - Small Files & All Flash: Inspur's works on Ceph
Danielle Womboldt
 
Ceph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for CephCeph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for Ceph
Danielle Womboldt
 
Ceph Day Beijing - Ceph RDMA Update
Ceph Day Beijing - Ceph RDMA UpdateCeph Day Beijing - Ceph RDMA Update
Ceph Day Beijing - Ceph RDMA Update
Danielle Womboldt
 
Ad

Recently uploaded (20)

HCL Nomad Web – Best Practices and Managing Multiuser Environments
HCL Nomad Web – Best Practices and Managing Multiuser EnvironmentsHCL Nomad Web – Best Practices and Managing Multiuser Environments
HCL Nomad Web – Best Practices and Managing Multiuser Environments
panagenda
 
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxDevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
Justin Reock
 
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxSpecial Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
shyamraj55
 
How Can I use the AI Hype in my Business Context?
How Can I use the AI Hype in my Business Context?How Can I use the AI Hype in my Business Context?
How Can I use the AI Hype in my Business Context?
Daniel Lehner
 
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Aqusag Technologies
 
Greenhouse_Monitoring_Presentation.pptx.
Greenhouse_Monitoring_Presentation.pptx.Greenhouse_Monitoring_Presentation.pptx.
Greenhouse_Monitoring_Presentation.pptx.
hpbmnnxrvb
 
Big Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur MorganBig Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur Morgan
Arthur Morgan
 
tecnologias de las primeras civilizaciones.pdf
tecnologias de las primeras civilizaciones.pdftecnologias de las primeras civilizaciones.pdf
tecnologias de las primeras civilizaciones.pdf
fjgm517
 
AI and Data Privacy in 2025: Global Trends
AI and Data Privacy in 2025: Global TrendsAI and Data Privacy in 2025: Global Trends
AI and Data Privacy in 2025: Global Trends
InData Labs
 
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
organizerofv
 
Cybersecurity Identity and Access Solutions using Azure AD
Cybersecurity Identity and Access Solutions using Azure ADCybersecurity Identity and Access Solutions using Azure AD
Cybersecurity Identity and Access Solutions using Azure AD
VICTOR MAESTRE RAMIREZ
 
Splunk Security Update | Public Sector Summit Germany 2025
Splunk Security Update | Public Sector Summit Germany 2025Splunk Security Update | Public Sector Summit Germany 2025
Splunk Security Update | Public Sector Summit Germany 2025
Splunk
 
Technology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data AnalyticsTechnology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data Analytics
InData Labs
 
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell: Transforming Business Strategy Through Data-Driven InsightsAndrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell
 
Build Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For DevsBuild Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For Devs
Brian McKeiver
 
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
BookNet Canada
 
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-UmgebungenHCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
panagenda
 
Generative Artificial Intelligence (GenAI) in Business
Generative Artificial Intelligence (GenAI) in BusinessGenerative Artificial Intelligence (GenAI) in Business
Generative Artificial Intelligence (GenAI) in Business
Dr. Tathagat Varma
 
Rusty Waters: Elevating Lakehouses Beyond Spark
Rusty Waters: Elevating Lakehouses Beyond SparkRusty Waters: Elevating Lakehouses Beyond Spark
Rusty Waters: Elevating Lakehouses Beyond Spark
carlyakerly1
 
ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes Partner Innovation Updates for May 2025ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes
 
HCL Nomad Web – Best Practices and Managing Multiuser Environments
HCL Nomad Web – Best Practices and Managing Multiuser EnvironmentsHCL Nomad Web – Best Practices and Managing Multiuser Environments
HCL Nomad Web – Best Practices and Managing Multiuser Environments
panagenda
 
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxDevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx
Justin Reock
 
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxSpecial Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
shyamraj55
 
How Can I use the AI Hype in my Business Context?
How Can I use the AI Hype in my Business Context?How Can I use the AI Hype in my Business Context?
How Can I use the AI Hype in my Business Context?
Daniel Lehner
 
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...
Aqusag Technologies
 
Greenhouse_Monitoring_Presentation.pptx.
Greenhouse_Monitoring_Presentation.pptx.Greenhouse_Monitoring_Presentation.pptx.
Greenhouse_Monitoring_Presentation.pptx.
hpbmnnxrvb
 
Big Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur MorganBig Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur Morgan
Arthur Morgan
 
tecnologias de las primeras civilizaciones.pdf
tecnologias de las primeras civilizaciones.pdftecnologias de las primeras civilizaciones.pdf
tecnologias de las primeras civilizaciones.pdf
fjgm517
 
AI and Data Privacy in 2025: Global Trends
AI and Data Privacy in 2025: Global TrendsAI and Data Privacy in 2025: Global Trends
AI and Data Privacy in 2025: Global Trends
InData Labs
 
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
IEDM 2024 Tutorial2_Advances in CMOS Technologies and Future Directions for C...
organizerofv
 
Cybersecurity Identity and Access Solutions using Azure AD
Cybersecurity Identity and Access Solutions using Azure ADCybersecurity Identity and Access Solutions using Azure AD
Cybersecurity Identity and Access Solutions using Azure AD
VICTOR MAESTRE RAMIREZ
 
Splunk Security Update | Public Sector Summit Germany 2025
Splunk Security Update | Public Sector Summit Germany 2025Splunk Security Update | Public Sector Summit Germany 2025
Splunk Security Update | Public Sector Summit Germany 2025
Splunk
 
Technology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data AnalyticsTechnology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data Analytics
InData Labs
 
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell: Transforming Business Strategy Through Data-Driven InsightsAndrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell
 
Build Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For DevsBuild Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For Devs
Brian McKeiver
 
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025
BookNet Canada
 
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-UmgebungenHCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen
panagenda
 
Generative Artificial Intelligence (GenAI) in Business
Generative Artificial Intelligence (GenAI) in BusinessGenerative Artificial Intelligence (GenAI) in Business
Generative Artificial Intelligence (GenAI) in Business
Dr. Tathagat Varma
 
Rusty Waters: Elevating Lakehouses Beyond Spark
Rusty Waters: Elevating Lakehouses Beyond SparkRusty Waters: Elevating Lakehouses Beyond Spark
Rusty Waters: Elevating Lakehouses Beyond Spark
carlyakerly1
 
ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes Partner Innovation Updates for May 2025ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes Partner Innovation Updates for May 2025
ThousandEyes
 

Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture

  • 1. Ceph All-Flash Array Design Based on NUMA Architecture QCT (QuantaCloud Technology) Marco Huang, Technical Manager Becky Lin, ProgramManager
  • 2. • All-flash Ceph and Use Cases • QCT QxStor All-flash Ceph for IOPS • QCT Lab Environment Overview & Detailed Architecture • Importance of NUMA and Proof Points Agenda 2 QCTCONFIDENTIAL
  • 3. QCT Powers Most of Cloud Services Global Tier 1 Hyperscale Datacenters, Telcos and Enterprises 3 QCTCONFIDENTIAL • QCT (Quanta Cloud Technology) was a subsidiary of Quanta Computer • Quanta Computer is a fortune global 500 company with over $32B revenue
  • 4. Why All-flash Storage? • Falling flash prices: Flashprices fell as much as 75% over the 18 months leading up to mid-2016 and the trend continues. “TechRepublic: 10 storage trends to watch in 2016” • Flashis 10x cheaper than DRAM: withpersistence and highcapacity “NetApp” • Flashis 100x cheaper than disk: pennies per IOPS vs. dollars per IOPS “NetApp” • Flashis 1000x fasterthan disk: latency drops from milliseconds to microseconds “NetApp” €$¥ • Flashperformance advantage: HDDs have anadvantage in $/GB, while flashhas anadvantage in $/IOPS. “TechTarget: Hybrid storage arrays vs. all- flash arrays: A little flash or a lot?” • NVMe-based storage trend: 60% of enterprise storage appliances will have NVMe bays by 2020 “G2M Research” Requiresub-millisecond latencyNeed performance-optimized storageformission-critical apps Flash capacity gains while the price drops QCTCONFIDENTIAL
  • 5. All-flash Ceph Use Cases 5 QCTCONFIDENTIAL
  • 6. QCT QxStor Red Hat Ceph Storage Edition Optimized for workloads ThroughputOptimized • Densest 1U Ceph building block • Smaller failure domain • 3x SSDS3710 journal • 12x HDD 7.2krpm • Obtain best throughput & density at once • Scale at high scale 700TB • 2x 2x NVMe P3700 journal • 2x 35xHDD • Block or Object Storage, Video, Audio, Image, Streaming media, Big Data • 3x replication USECASE QxStor RCT-400QxStor RCT-200 Cost/Capacity Optimized QxStor RCC-400 • Maximize storage capacity • Highest density 560TB* raw capacity per chassis • 2x 35xHDD • Object storage, Archive, Backup, Enterprise Dropbox • Erasure coding D51PH-1ULH T21P-4U * Optional model, oneMB per chassis, can support 620TB raw capacity IOPS Optimized QxStor RCI-300 • All FlashDesign • Lowest latency • 4x P3520 2TB or 4x P3700 1.6TB • Database, HPC, Mission Critical Applications • 2x replication D51BP-1U 6 T21P-4U
  • 7. 7 QCT QxStor RCI-300 All Flash Design Ceph for I/O Intensive Workloads SKU1: All flash Ceph - theBest IOPS SKU • Ceph Storage Server:D51BP-1U • CPU: 2x E5-2995 v4 or plus • RAM: 128GB • NVMe SSD:4x P3700 1.6TB • NIC: 10GbE dual port or 40GbE dual port SKU2: All flash Ceph - IOPS/Capacity Balanced SKU (best TCO, as of today) • Ceph Storage Server:D51BP-1U • CPU: 2x E5-2980 v4 or higher cores • RAM: 128GB • NVMe SSD:4x P3520 2TB • NIC: 10GbE dual port or 40GbE dual port NUMA Balanced Ceph Hardware Highest IOPS & Lowest Latency Optimized Ceph & HW Integration for IOPS intensive workloads QCTCONFIDENTIAL
  • 8. 8 NVMe: Best-in-Class IOPS, Lower/Consistent Latency Lowest Latency of Standard Storage Interfaces 0 500000 100% Read 70% Read 0% Read IOPS IOPS - 4K RandomWorkloads PCIe/NVMe SAS 12Gb/s 3x better IOPS vs SAS 12Gbps For the same #CPU cycles, NVMe delivers over 2X the IOPs of SAS! Gen1 NVMe has 2 to 3x better Latency Consistency vs SAS Test andSystem Configurations: PCI Express* (PCIe* )/NVM Express* (NVMe) Measurements made onIntel® Core™ i7-3770S system @3.1GHzand 4GBMem running Windows* Server 2012 Standard O/S, IntelPCIe/NVMe SSDs,data collected by IOmeter* tool. SAS Measurements from HGST Ultrastar* SSD800M/1000M(SAS), SATA S3700 Series. For more complete information about performance andbenchmark results, visit http://www.intel.com/performance. Source: Intel Internal Testing.
  • 9. QCTCONFIDENTIAL RADOS LIBRADOS QCT Lab Environment Overview Ceph 1 …. Monitors Storage Clusters Interfaces RBD (Block Storage) Cluster Network 10GbE Public Network 10GbE Clients Client 1 Client 2 Client 9 Client 10 Ceph 2 Ceph 5 …. QCTCONFIDENTIAL
  • 10. 5-Node all-NVMe Ceph Cluster Dual-Xeon E5 [email protected], 88 HT, 128GB DDR4 RHEL 7.3, 3.10, Red Hat Ceph 2.1 10x Client Systems Dual-Xeon E5 [email protected] 88 HT, 128 GB DDR4 CephOSD1 CephOSD2 CephOSD3 CephOSD4 CephOSD16 … NVMe3 NVMe2 NVMe4 NVMe1 20x 2TB P3520 SSDs 80 OSDs 2x Replication 19TB EffectiveCapacity Tests at cluster fill-level 82% CephRBDClient Docker3 Sysbench Client Docker4 Sysbench Client Docker2 (krbd) Percona DB Server Docker1 (krbd) Percona DB Server ClusterNW10GbE Sysbench Containers 16 vCPUs, 32GB RAM FIO 2.8, Sysbench0.5 DB Containers 16 vCPUs, 32GB RAM, 200GB RBD volume, 100GB MySQL dataset InnoDB bufcache25GB(25%) Public NW 10 GbE QuantaGrid D51BP-1U QCTCONFIDENTIAL Detailed System Architecture in QCT Lab QCTCONFIDENTIAL
  • 11. QCTCONFIDENTIAL Stage Test Subject Benchmark tools Major Task I/O Baseline Raw Disk FIO Determinemaximum server IO backplanebandwidth Network Baseline NIC iPerf Ensure consistent network bandwidth between all nodes Bare Metal RBD Baseline LibRBD FIO CBT Use FIO RBD engine to test performance using libRBD Docker Container OLTP Baseline Percona DB + Sysbench Sysbench/OLTP Establish number ofworkload- driver VMs desired per client Benchmark criteria: 1. Default: ceph.conf 2. Software Level Tuning: ceph.conftuned 3. Software + NUMA CPU Pinning: ceph.conftuned +NUMACPU Pinning Benchmark Methodology QCTCONFIDENTIAL QCTCONFIDENTIAL
  • 12. QCTCONFIDENTIAL • Use faster media for journals, metadata • Use recent Linux kernels – blk-mq support packs big performance gains with NVMe media – optimizations for non-rotational media • Use tuned where available – adaptive latency performance tuning [2] • Virtual memory, network and storage tweaks – use commonly recommended VM, network settings [1-4] – enable rq_affinity, read ahead for NVMe devices • BIOS and CPU performance governor settings – disable C-states and enable Turbo-boost – use “performance” CPU governor Configuring All-flash Ceph System Tuning for Low-latencyWorkloads [1] https://wiki.mikejung.biz/Ubuntu_Performance_Tuning [2] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/tuned-adm.html [3] http://www.brendangregg.com/blog/2015-03-03/performance-tuning-linux-instances-on-ec2.html [4] https://www.suse.com/documentation/ses-4/singlehtml/book_storage_admin/book_storage_admin.html
  • 13. Parameter Default value Tuned value objecter_inflight_ops 1024 102400 Objecter is responsible for sending requests to OSD. objecter_inflight_ops_byte s 104857600 1048576000 Objecter_inflight_ops/objecter_inflight_op_bytes tell objecter to throttle outgoing ops according to budget (values based on experiments in the Dumpling timeframe) ms_dispatch_throttle_byte s 104857600 1048576000 ms_dispatch_throttle_bytes throttle is to dispatch message size for simple messenger (values based on experiments in the Dumpling timeframe) filestore_queue_max_ops 50 5000 filestore_queue_max_ops/filestore_queue_max_bytes throttle are used to throttle inflight ops for filestore filestore_queue_max_bytes 104857600 1048576000 These throttles are checked before sending ops to journal, so if filestore does not get enough budget for current op, OSD op thread will be blocked Configuring All-flash Ceph Ceph Tunables QCTCONFIDENTIAL
  • 14. 14 Parameter Default value Tuned value filestore_max_sync_interv al 5 10 filestore_max_sync_interval controls the interval (in seconds) that sync thread flush data from memory to disk. Use page cache - by default filestore writes data to memory and sync thread is responsible for flushing data to disk, then journal entries can be trimmed. Note that large filestore_max_sync_interval can cause performance spike filestore_op_threads 2 6 filestore_op_threads controls the number of filesystem operation threads that execute in parallel. If the storage backend is fast enough and has enough queues to support parallel operations, it’s recommended to increase this parameter, given there is enough CPU available osd_op_threads 2 32 osd_op_threads controls the number of threads to service Ceph OSD Daemon operations. Setting this to 0 will disable multi-threading. Increasing this number may increase the request processing rate If the storage backend is fast enough and has enough queues to support parallel operations, it’s recommended to increase this parameter, given there is enough CPU available Configuring All-flash Ceph Ceph Tunables QCTCONFIDENTIAL
  • 15. Configuring All-flash Ceph: Ceph Tunables
     Parameter | Default value | Tuned value | Notes
     journal_queue_max_ops | 300 | 3000 | journal_queue_max_ops/journal_queue_max_bytes throttle in-flight ops for the journal. If the journal does not get enough budget for the current op, it will block the OSD op thread.
     journal_queue_max_bytes | 33554432 | 1048576000 | See above.
     journal_max_write_entries | 100 | 1000 | journal_max_write_entries/journal_max_write_bytes throttle the ops or bytes for every journal write. Tweaking these two parameters may be helpful for small writes.
     journal_max_write_bytes | 10485760 | 1048576000 | See above.
     These tunables can also be inspected and injected at runtime, as sketched below.
     QCTCONFIDENTIAL
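  A hedged sketch of runtime inspection and injection for the throttles in the three tables above; the OSD id and the grep pattern are illustrative, and persistent values belong in the [osd] section of the ceph.conf reproduced at the end of this deck:

     # Show the currently active values on one OSD via its admin socket
     ceph daemon osd.0 config show | grep -E 'objecter_inflight|ms_dispatch|filestore_queue|journal_'
     # Inject new values into all OSDs without a restart (not persistent across restarts)
     ceph tell osd.\* injectargs '--journal_max_write_entries 1000 --journal_max_write_bytes 1048576000'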
  • 16. Multi-partitioned NVMe SSDs
     • Leverage the latest Intel NVMe technology to reach high performance and bigger capacity at a lower $/GB
        – Intel DC P3520 2TB raw performance: 375K read IOPS, 26K write IOPS
     • By using multiple OSD partitions per device, Ceph performance scales linearly (see the partitioning sketch below)
        – Reduces lock contention within a single OSD process
        – Lower latency at all queue depths, with the biggest impact on random reads
     • Introduces the concept of multiple OSDs on the same physical device
        – Conceptually similar CRUSH map data placement rules to managing disks in an enclosure
     Diagram: one NVMe device hosting OSD1, OSD2, OSD3 and OSD4.
     QCTCONFIDENTIAL
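  A minimal partitioning sketch, assuming four equal OSD partitions per NVMe device. The device name is an assumption, and RHCS 2.x (Jewel) deployments of this era typically used ceph-disk or ceph-ansible to prepare each partition as its own FileStore OSD:

     DEV=/dev/nvme0n1
     parted -s "$DEV" mklabel gpt
     for i in 1 2 3 4; do
         # four equal slices: 0-25%, 25-50%, 50-75%, 75-100%
         parted -s "$DEV" mkpart osd-$i $(( (i-1)*25 ))% $(( i*25 ))%
     done
     for i in 1 2 3 4; do
         ceph-disk prepare "${DEV}p${i}"    # each partition becomes an independent OSD
     done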
  • 17. Multi-partitioned NVMe SSDs
     Chart: 4K random read, latency vs. IOPS, 5 nodes with 20/40/80 OSDs (1, 2 or 4 OSDs per NVMe).
     Chart: single-node CPU utilization, 4K random read @ QD=32, 4/8/16 OSDs (1, 2 or 4 OSDs per NVMe).
     These measurements were done on a Ceph node based on Intel P3700 NVMe SSDs but are equally applicable to other NVMe devices.
     QCTCONFIDENTIAL
  • 18. Performance Testing Results: 4K 100% Random Read
     Chart: 4K random read, latency vs. IOPS, IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph Storage 2.1, default vs. tuned configuration.
     Key callouts: ~1.34M IOPS @ ~1ms (QD=16); ~1.57M IOPS @ ~4ms; 200% improvement in IOPS and latency.
     QCTCONFIDENTIAL
  • 19. Performance Testing Results: 4K 100% Random Write, 70/30 OLTP Mix
     Chart: latency vs. IOPS (100% write and 70/30 rd/wr), IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph 2.1.
     Key callouts: ~450K 70/30 OLTP IOPS @ ~1ms (QD=4); ~165K write IOPS @ ~2ms (QD=4).
     QCTCONFIDENTIAL
  • 20. NUMA Considerations
     • NUMA-balance network and storage devices across CPU sockets
     • Bind IO devices to the local CPU socket (IRQ pinning)
     • Align OSD data and journals to the same NUMA node
     • Pin OSD processes to the local CPU socket (NUMA node pinning), as sketched below
     Diagram: two NUMA nodes (Socket 0 and Socket 1) joined by QPI, each with local memory, NICs, storage and Ceph OSD processes, contrasting local vs. remote access paths.
     QCTCONFIDENTIAL
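  A hedged pinning sketch for the last two bullets; the NIC name, OSD id, and the use of Intel's set_irq_affinity helper script are assumptions, not details from the test bed:

     # Steer NIC interrupts to the socket the device hangs off
     # (set_irq_affinity ships with Intel NIC driver sources; writing masks to
     #  /proc/irq/<n>/smp_affinity by hand achieves the same thing)
     ./set_irq_affinity local ens785f0
     # Run an OSD restricted to the CPUs and memory of NUMA node 0
     numactl --cpunodebind=0 --membind=0 \
         /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
     # With systemd-managed OSDs, a unit drop-in with CPUAffinity=<cores of node 0> is an alternative.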
  • 21. NUMA-Balanced Config on QCT QuantaGrid D51BP-1U
     Diagram: two sockets (CPU 0 and CPU 1) joined by QPI, each with local RAM; the server's 4 NVMe drive slots (PCIe Gen3 x4 each) and NIC slot (PCIe Gen3 x8) are distributed across the two sockets, with Ceph OSDs 1-8 pinned to CPU 0 and OSDs 9-16 to CPU 1.
     QCTCONFIDENTIAL
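  A quick way to verify this balance on a running node is to ask sysfs which NUMA node each device is attached to; the device names below are assumptions:

     # 0/1 map to the two sockets; -1 means the platform did not expose locality
     cat /sys/class/net/ens785f0/device/numa_node
     for c in /sys/class/nvme/nvme*; do echo "$c: $(cat $c/device/numa_node)"; done
     numactl --hardware        # lists the cores and memory belonging to each node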
  • 22. Performance Testing Results: Latency Improvements After NUMA Optimizations
     Chart: 70/30 4K OLTP performance before vs. after NUMA balancing (SW tuned vs. SW tuned + NUMA CPU pinning), IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph 2.1.
     With NUMA balancing: 40% better IOPS and 100% better latency at QD=8. At QD=8: 100% better average latency, 15-20% better 90th-percentile latency, 10-15% better 99th-percentile latency.
     QCTCONFIDENTIAL
  • 23.
     • All-NVMe Ceph enables high-performance workloads
     • NUMA-balanced architecture
     • Small footprint (1U), lower overall TCO
     • Million IOPS with very low latency
     QCTCONFIDENTIAL
  • 24. For Other Information…
     Visit www.QCT.io for QxStor Red Hat Ceph Storage Edition:
     • Reference Architecture: Red Hat Ceph Storage on QCT Servers
     • Datasheet: QxStor Red Hat Ceph Storage
     • Solution Brief: QCT and Intel Hadoop Over Ceph Architecture
     • Solution Brief: Deploying Red Hat Ceph Storage on QCT Servers
     • Solution Brief: Containerized Ceph for On-Demand, Hyperscale Storage
     QCTCONFIDENTIAL
  • 26. Configuration Detail – ceph.conf (1/2)
     # Please do not change this file directly since it is managed by Ansible and will be overwritten
     [global]
     fsid = 7e191449-3592-4ec3-b42b-e2c4d01c0104
     max open files = 131072
     crushtool = /usr/bin/crushtool
     debug_lockdep = 0/1
     debug_context = 0/1
     debug_crush = 1/1
     debug_buffer = 0/1
     debug_timer = 0/0
     debug_filer = 0/1
     debug_objecter = 0/1
     debug_rados = 0/5
     debug_rbd = 0/5
     debug_ms = 0/5
     debug_monc = 0/5
     debug_tp = 0/5
     debug_auth = 1/5
     debug_finisher = 1/5
     debug_heartbeatmap = 1/5
     debug_perfcounter = 1/5
     debug_rgw = 1/5
     debug_asok = 1/5
     debug_throttle = 1/1
     debug_journaler = 0/0
     debug_objectcacher = 0/0
     debug_client = 0/0
     debug_osd = 0/0
     debug_optracker = 0/0
     debug_objclass = 0/0
     debug_filestore = 0/0
     debug_journal = 0/0
     debug_mon = 0/0
     debug_paxos = 0/0
     osd_crush_chooseleaf_type = 0
     filestore_xattr_use_omap = true
     osd_pool_default_size = 1
     osd_pool_default_min_size = 1
  • 27. Configuration Detail – ceph.conf (2/2)
     rbd_cache = true
     mon_compact_on_trim = false
     log_to_syslog = false
     log_file = /var/log/ceph/$name.log
     mutex_perf_counter = true
     throttler_perf_counter = false
     ms_nocrc = true
     [client]
     admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
     log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
     rbd_cache = true
     rbd_cache_writethrough_until_flush = false
     [mon]
     [mon.qct50]
     host = qct50
     # we need to check if monitor_interface is defined in the inventory per host or if it's set in a group_vars file
     mon addr = 10.5.15.50
     mon_max_pool_pg_num = 166496
     mon_osd_max_split_count = 10000
     [osd]
     osd mkfs type = xfs
     osd mkfs options xfs = -f -i size=2048
     osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k,delaylog"
     osd journal size = 10240
     cluster_network = 10.5.16.0/24
     public_network = 10.5.15.0/24
     filestore_queue_max_ops = 5000
     osd_client_message_size_cap = 0
     objecter_inflight_op_bytes = 1048576000
     ms_dispatch_throttle_bytes = 1048576000
     filestore_wbthrottle_enable = True
     filestore_fd_cache_shards = 64
     objecter_inflight_ops = 1024000
     filestore_max_sync_interval = 10
     filestore_op_threads = 16
     osd_pg_object_context_cache_count = 10240
     journal_queue_max_ops = 3000
     filestore_odsync_write = True
     journal_queue_max_bytes = 10485760000
     journal_max_write_entries = 1000
     filestore_queue_committing_max_ops = 5000
     journal_max_write_bytes = 1048576000
     filestore_fd_cache_size = 10240
     osd_client_message_cap = 0
     journal_dynamic_throttle = True
     osd_enable_op_tracker = False
  • 28. Configuration Detail - CBT YAML File
     cluster:
       head: "root@qct50"
       clients: ["root@qct50", "root@qct51", "root@qct52", "root@qct53", "root@qct54", "root@qct55", "root@qct56", "root@qct57", "root@qct58", "root@qct59"]
       osds: ["root@qct62", "root@qct63", "root@qct64", "root@qct65", "root@qct66"]
       mons: ["root@qct50"]
       osds_per_node: 16
       fs: xfs
       mkfs_opts: -f -i size=2048 -n size=64k
       mount_opts: -o inode64,noatime,logbsize=256k
       conf_file: /etc/ceph/ceph.conf
       ceph.conf: /etc/ceph/ceph.conf
       iterations: 1
       rebuild_every_test: False
       tmp_dir: "/tmp/cbt"
       clusterid: 7e191449-3592-4ec3-b42b-e2c4d01c0104
       use_existing: True
       pool_profiles:
         replicated:
           pg_size: 8192
           pgp_size: 8192
           replication: 2
     benchmarks:
       librbdfio:
         rbdadd_mons: "root@qct50:6789"
         rbdadd_options: "noshare"
         time: 300
         ramp: 100
         vol_size: 8192
         mode: ['randread']
         numjobs: 1
         use_existing_volumes: False
         procs_per_volume: [1]
         volumes_per_client: [10]
         op_size: [4096]
         concurrent_procs: [1]
         iodepth: [4, 8, 16, 32, 64, 128]
         osd_ra: [128]
         norandommap: True
         cmd_path: '/root/cbt_packages/fio/fio'
         log_avg_msec: 250
         pool_profile: 'replicated'
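  For reference, a sketch of how a YAML file like the one above is fed to the Ceph Benchmarking Tool; the YAML file name and archive directory are assumptions (CBT lives at https://github.com/ceph/cbt):

     git clone https://github.com/ceph/cbt.git
     ./cbt/cbt.py --archive=/tmp/cbt-results --conf=/etc/ceph/ceph.conf all_nvme_randread.yaml
     # results, including the per-iodepth fio logs, land under /tmp/cbt-results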
  • 29. QCT CONFIDENTIAL www.QCT.io Looking for an innovative cloud solution? Come to QCT, who else?