Using Ceph with 
OpenNebula 
John Spray 
john.spray@redhat.com
Agenda 
● What is it? 
● Architecture 
● Integration with OpenNebula 
● What's new? 
2 OpenNebulaConf 2014 Berlin
What is Ceph? 
3 OpenNebulaConf 2014 Berlin
What is Ceph? 
● Highly available resilient data store 
● Free Software (LGPL) 
● 10 years since inception 
● Flexible object, block and filesystem interfaces 
● Especially popular in private clouds as a VM image 
service and an S3-compatible object storage service. 
4 OpenNebulaConf 2014 Berlin
Interfaces to storage 
● OBJECT STORAGE (RGW): S3 & Swift, Multi-tenant, Keystone, Geo-Replication, Native API 
● BLOCK STORAGE (RBD): Snapshots, Clones, OpenStack, Linux Kernel, iSCSI 
● FILE SYSTEM (CephFS): POSIX, Linux Kernel, CIFS/NFS, HDFS, Distributed Metadata 
5 OpenNebulaConf 2014 Berlin 
Ceph Architecture 
6 OpenNebulaConf 2014 Berlin
Architectural Components 
APP HOST/VM CLIENT 
● RGW: a web services gateway for object storage, compatible with S3 and Swift 
● RBD: a reliable, fully-distributed block device with cloud platform integration 
● CEPHFS: a distributed file system with POSIX semantics and scale-out metadata management 
● LIBRADOS: a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
● RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 
7 OpenNebulaConf 2014 Berlin 
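The RADOS layer can be exercised directly with the `rados` tool, the CLI counterpart of LIBRADOS. A minimal sketch (pool and object names are illustrative, and the pool is assumed to already exist): 
# store a local file as an object, list the pool, read the object back 
rados -p rbd put greeting /etc/hostname 
rados -p rbd ls 
rados -p rbd get greeting /tmp/greeting.out 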
Object Storage Daemons 
[Diagram: four OSD daemons, each running on a local filesystem (btrfs, xfs or ext4) on its own disk, alongside three monitors (M)] 
8 OpenNebulaConf 2014 Berlin 
RADOS Components 
OSDs: 
– 10s to 10000s in a cluster 
– One per disk (or one per SSD, RAID group…) 
– Serve stored objects to clients 
– Intelligently peer for replication & recovery 
Monitors (M): 
– Maintain cluster membership and state 
– Provide consensus for distributed decision-making 
– Small, odd number 
– These do not serve stored objects to clients 
9 OpenNebulaConf 2014 Berlin 
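A few standard commands for inspecting a running cluster from a node with admin credentials; a hedged illustration, with no cluster-specific names assumed: 
ceph -s          # overall health, monitor quorum, OSD and PG counts 
ceph osd tree    # OSDs laid out by host/rack, as CRUSH sees them 
ceph mon stat    # monitor membership and quorum 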
RADOS Cluster 
[Diagram: an application talking to a RADOS cluster made up of many OSDs and a small number of monitors (M)] 
10 OpenNebulaConf 2014 Berlin 
Where do objects live? 
[Diagram: an application holding an object, asking which of the cluster's OSDs it should live on] 
11 OpenNebulaConf 2014 Berlin 
A Metadata Server? 
[Diagram: one option: the application asks a central metadata server where the object lives (1), then fetches it from the right OSD (2)] 
12 OpenNebulaConf 2014 Berlin 
Calculated placement 
[Diagram: the application computes placement itself with a hash function F that maps object names to fixed server ranges (A-G, H-N, O-T, U-Z)] 
13 OpenNebulaConf 2014 Berlin 
Even better: CRUSH 
14 OpenNebulaConf 2014 Berlin 
[Diagram: CRUSH maps an object pseudo-randomly onto OSDs throughout the RADOS cluster] 
CRUSH is a quick calculation 
15 OpenNebulaConf 2014 Berlin 
[Diagram: the client computes the placement itself and contacts the responsible OSDs directly, with no lookup step] 
CRUSH: Dynamic data placement 
CRUSH: 
– Pseudo-random placement algorithm 
– Fast calculation, no lookup (see the example after this slide) 
– Repeatable, deterministic 
– Statistically uniform distribution 
– Stable mapping 
– Limited data migration on change 
– Rule-based configuration 
– Infrastructure topology aware 
– Adjustable replication 
– Weighting 
16 OpenNebulaConf 2014 Berlin 
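CRUSH placement can be queried directly; the cluster performs the same calculation a client would. A quick sketch (pool and object names are illustrative): 
# show which placement group and which OSDs CRUSH chooses for an object 
ceph osd map rbd my-object 
# the result names the PG and the acting set of OSDs; no lookup table is consulted 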
Architectural Components 
APP HOST/VM CLIENT 
● RGW: a web services gateway for object storage, compatible with S3 and Swift 
● RBD: a reliable, fully-distributed block device with cloud platform integration 
● CEPHFS: a distributed file system with POSIX semantics and scale-out metadata management 
● LIBRADOS: a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
● RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 
17 OpenNebulaConf 2014 Berlin 
RBD: Virtual disks in Ceph 
18 OpenNebulaConf 2014 Berlin 
RADOS BLOCK DEVICE: 
– Storage of disk images in RADOS 
– Decouples VMs from host 
– Images are striped across the cluster (pool) 
– Snapshots 
– Copy-on-write clones (example commands follow this slide) 
– Support in: 
– Mainline Linux Kernel (2.6.39+) 
– Qemu/KVM 
– OpenStack, CloudStack, OpenNebula, Proxmox 
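A hedged sketch of the snapshot and clone workflow with the `rbd` CLI (pool and image names are illustrative; sizes are given in MB in this era of the tool): 
rbd create --size 10240 one/base-image          # 10 GB image in pool 'one' 
rbd snap create one/base-image@golden 
rbd snap protect one/base-image@golden          # clones require a protected snapshot 
rbd clone one/base-image@golden one/vm-0-disk   # thin, copy-on-write clone 
rbd ls -l one 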
Storing virtual disks 
[Diagram: a VM's hypervisor uses LIBRBD to store the virtual disk in the RADOS cluster] 
19 OpenNebulaConf 2014 Berlin 
Using Ceph with OpenNebula 
20 OpenNebulaConf 2014 Berlin
Storage in OpenNebula deployments 
OpenNebula Cloud Architecture Survey 2014 (http://c12g.com/resources/survey/) 
21 OpenNebulaConf 2014 Berlin
RBD and libvirt/qemu 
● librbd (user space) client integration with libvirt/qemu 
● Support for live migration, thin clones 
● Get recent versions! 
● Directly supported in OpenNebula since 4.0 with the 
Ceph Datastore (wraps `rbd` CLI) 
More info online: 
http://ceph.com/docs/master/rbd/libvirt/ 
http://docs.opennebula.org/4.10/administration/storage/ceph_ds.html 
22 OpenNebulaConf 2014 Berlin
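A sketch of what Ceph Datastore registration looks like in OpenNebula 4.x; the attribute names follow the ceph_ds documentation linked above, while hosts, pool name and secret UUID are illustrative and cephx is assumed to be enabled: 
# cephx key for the hypervisors (registered with libvirt as a secret separately) 
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow rwx pool=one' 
cat > ceph_ds.conf <<EOF 
NAME        = ceph_ds 
DS_MAD      = ceph 
TM_MAD      = ceph 
DISK_TYPE   = RBD 
POOL_NAME   = one 
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789" 
CEPH_USER   = libvirt 
CEPH_SECRET = "11111111-2222-3333-4444-555555555555" 
BRIDGE_LIST = "hv1 hv2" 
EOF 
onedatastore create ceph_ds.conf 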
Other hypervisors 
● OpenNebula is flexible, so can we also use Ceph with 
non-libvirt/qemu hypervisors? 
● Kernel RBD: can present RBD images in /dev/ on 
hypervisor host for software unaware of librbd 
● Docker: can exploit RBD volumes with a local 
filesystem for use as data volumes – maybe CephFS 
in future...? 
● For unsupported hypervisors, can adapt to Ceph using 
e.g. iSCSI for RBD, or NFS for CephFS (but test re-exports 
carefully!) 
23 OpenNebulaConf 2014 Berlin
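For the kernel RBD route mentioned above, a minimal sketch (image name and mount point are illustrative; the rbd kernel module must be available on the host): 
rbd map one/data-volume        # exposes the image as a block device, e.g. /dev/rbd0 
mkfs.xfs /dev/rbd0 
mount /dev/rbd0 /mnt/volume 
# ...use it like any local block device, then: 
umount /mnt/volume 
rbd unmap /dev/rbd0 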
Choosing hardware 
Testing/benchmarking/expert advice is needed, but there 
are general guidelines: 
● Prefer many cheap nodes to few expensive nodes (10 
is better than 3) 
● Include small but fast SSDs for OSD journals 
● Don't simply buy biggest drives: consider 
IOPs/capacity ratio 
● Provision network and IO capacity sufficient for your 
workload plus recovery bandwidth from node failure. 
24 OpenNebulaConf 2014 Berlin
What's new? 
25 OpenNebulaConf 2014 Berlin
Ceph releases 
● Ceph 0.80 firefly (May 2014) 
– Cache tiering & erasure coding 
– Key/val OSD backends 
– OSD primary affinity 
● Ceph 0.87 giant (October 2014) 
– RBD cache enabled by default 
– Performance improvements 
– Locally recoverable erasure codes 
● Ceph x.xx hammer (2015) 
26 OpenNebulaConf 2014 Berlin
Additional components 
● Ceph FS – scale-out POSIX filesystem service, 
currently being stabilized 
● Calamari – monitoring dashboard for Ceph 
● ceph-deploy – easy SSH-based deployment tool 
● Puppet, Chef modules 
27 OpenNebulaConf 2014 Berlin
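To give a feel for ceph-deploy, a hedged bootstrap sketch (hostnames and device paths are illustrative; the journal argument shows the SSD-journal layout suggested earlier): 
ceph-deploy new mon1 mon2 mon3              # generate initial ceph.conf and monitor keys 
ceph-deploy install mon1 mon2 mon3 osd1 
ceph-deploy mon create-initial 
ceph-deploy osd create osd1:sdb:/dev/ssd1   # data on sdb, journal on an SSD partition 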
Get involved 
Evaluate the latest releases: 
http://ceph.com/resources/downloads/ 
Mailing list, IRC: 
http://ceph.com/resources/mailing-list-irc/ 
Bugs: 
http://tracker.ceph.com/projects/ceph/issues 
Online developer summits: 
https://wiki.ceph.com/Planning/CDS 
28 OpenNebulaConf 2014 Berlin
Questions? 
29 OpenNebulaConf 2014 Berlin
30 OpenNebulaConf 2014 Berlin
Spare slides 
31 OpenNebulaConf 2014 Berlin
32 OpenNebulaConf 2014 Berlin
Ceph FS 
33 OpenNebulaConf 2014 Berlin
CephFS architecture 
● Dynamically balanced scale-out metadata 
● Inherit flexibility/scalability of RADOS for data 
● POSIX compatibility 
● Beyond POSIX: Subtree snapshots, recursive statistics 
Weil, Sage A., et al. "Ceph: A scalable, high-performance distributed file 
system." Proceedings of the 7th symposium on Operating systems 
design and implementation. USENIX Association, 2006. 
http://ceph.com/papers/weil-ceph-osdi06.pdf 
34 OpenNebulaConf 2014 Berlin
Components 
● Client: kernel, fuse, libcephfs 
● Server: MDS daemon 
● Storage: RADOS cluster (mons & OSDs) 
35 OpenNebulaConf 2014 Berlin
Components 
[Diagram: a Linux host running the ceph.ko client exchanges metadata and data with the Ceph server daemons (monitors, OSDs, MDS)] 
36 OpenNebulaConf 2014 Berlin 
From application to disk 
Application 
ceph-fuse libcephfs Kernel client 
ceph-mds 
Client network protocol 
RADOS 
Disk 
37 OpenNebulaConf 2014 Berlin
Scaling out FS metadata 
● Options for distributing metadata? 
– by static subvolume 
– by path hash 
– by dynamic subtree 
● Consider performance, ease of implementation 
38 OpenNebulaConf 2014 Berlin
Dynamic subtree placement 
39 OpenNebulaConf 2014 Berlin
Dynamic subtree placement 
● Locality: get the dentries in a dir from one MDS 
● Support read heavy workloads by replicating non-authoritative 
copies (cached with capabilities just like 
clients do) 
● In practice work at directory fragment level in order to 
handle large dirs 
40 OpenNebulaConf 2014 Berlin
Data placement 
● Stripe file contents across RADOS objects 
– get full RADOS cluster bandwidth from clients 
– fairly tolerant of object losses: reads return zeros 
● Control striping with layout vxattrs (example after this slide) 
– layouts also select between multiple data pools 
● Deletion is a special case: client deletions mark files 
'stray', RADOS delete ops sent by MDS 
41 OpenNebulaConf 2014 Berlin 
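Layout vxattrs are read and written with the ordinary xattr tools; a hedged example (paths and pool names are illustrative, and file layouts can only be changed while the file still has no data): 
getfattr -n ceph.file.layout /mnt/ceph/somefile 
setfattr -n ceph.file.layout.stripe_unit -v 1048576 /mnt/ceph/newfile 
setfattr -n ceph.dir.layout.pool -v ssd_data /mnt/ceph/fastdir   # new files here use another data pool 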
Clients 
● Two implementations: 
– ceph-fuse/libcephfs 
– kclient 
● Interplay with the VFS page cache; efficiency is harder with fuse (extraneous stats etc.) 
● Client performance matters for single-client workloads 
● A slow client can hold up others if it's hogging metadata 
locks: include clients in troubleshooting 
42 OpenNebulaConf 2014 Berlin 
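Both client implementations are mounted in the usual way; a sketch with illustrative monitor address and credentials: 
# kernel client (filesystem type is 'ceph') 
mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret 
# FUSE client 
ceph-fuse -m 192.168.0.1:6789 /mnt/ceph 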
Journaling and caching in MDS 
● Metadata ops initially journaled to striped journal "file" 
in the metadata pool. 
– I/O latency on metadata ops is sum of network 
latency and journal commit latency. 
– Metadata remains pinned in in-memory cache 
until expired from journal. 
43 OpenNebulaConf 2014 Berlin
Journaling and caching in MDS 
● In some workloads we expect almost all metadata to stay 
in cache; in others it's more of a stream. 
● Control cache size with mds_cache_size (example below) 
● Cache eviction relies on client cooperation 
● MDS journal replay not only recovers data but also 
warms up the cache. Use standby replay to keep that 
cache warm. 
44 OpenNebulaConf 2014 Berlin 
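The cache limit is counted in inodes rather than bytes; a hedged example of checking and changing it at runtime via the admin socket (MDS name and value are illustrative): 
ceph daemon mds.a config show | grep mds_cache_size 
ceph daemon mds.a config set mds_cache_size 200000   # or set 'mds cache size' in ceph.conf 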
Lookup by inode 
● Sometimes we need inode → path mapping: 
● Hard links 
● NFS handles 
● Costly to store this: mitigate by piggybacking paths 
(backtraces) onto data objects 
● Con: storing metadata to data pool 
● Con: extra IOs to set backtraces 
● Pro: disaster recovery from data pool 
● Future: improve backtrace writing latency 
45 OpenNebulaConf 2014 Berlin
CephFS in practice 
ceph-deploy mds create myserver 
ceph osd pool create fs_data 64       # pg_num is a required argument in firefly/giant 
ceph osd pool create fs_metadata 64 
ceph fs new myfs fs_metadata fs_data 
mount -t ceph x.x.x.x:6789:/ /mnt/ceph   # kernel client uses fs type 'ceph'; add name/secretfile options if cephx is enabled 
46 OpenNebulaConf 2014 Berlin 
Managing CephFS clients 
● New in giant: see hostnames of connected clients 
● Client eviction is sometimes important: 
● Skip the wait during reconnect phase on MDS restart 
● Allow others to access files locked by crashed client 
● Use OpTracker to inspect ongoing operations 
47 OpenNebulaConf 2014 Berlin
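These operations are exposed on the MDS admin socket; a hedged sketch (MDS name and client id are illustrative): 
ceph daemon mds.a session ls            # connected clients, including hostnames in giant 
ceph daemon mds.a session evict 4123    # forcibly remove a misbehaving client 
ceph daemon mds.a dump_ops_in_flight    # OpTracker view of in-progress operations 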
CephFS tips 
● Choose MDS servers with lots of RAM 
● Investigate clients when diagnosing stuck/slow access 
● Use recent Ceph and recent kernel 
● Use a conservative configuration: 
● Single active MDS, plus one standby 
● Dedicated MDS server 
● Kernel client 
● No snapshots, no inline data 
48 OpenNebulaConf 2014 Berlin
Towards a production-ready CephFS 
● Focus on resilience: 
1. Don't corrupt things 
2. Stay up 
3. Handle the corner cases 
4. When something is wrong, tell me 
5. Provide the tools to diagnose and fix problems 
● Achieve this first within a conservative single-MDS 
configuration 
49 OpenNebulaConf 2014 Berlin
Giant->Hammer timeframe 
● Initial online fsck (a.k.a. forward scrub) 
● Online diagnostics (`session ls`, MDS health alerts) 
● Journal resilience & tools (cephfs-journal-tool) 
● flock in the FUSE client 
● Initial soft quota support 
● General resilience: full OSDs, full metadata cache 
50 OpenNebulaConf 2014 Berlin
FSCK and repair 
● Recover from damage: 
● Loss of data objects (which files are damaged?) 
● Loss of metadata objects (what subtree is damaged?) 
● Continuous verification: 
● Are recursive stats consistent? 
● Does metadata on disk match cache? 
● Does file size metadata match data on disk? 
● Repair: 
● Automatic where possible 
● Manual tools to enable support 
51 OpenNebulaConf 2014 Berlin
Client management 
● Current eviction is not 100% safe against rogue clients 
● Update to client protocol to wait for OSD blacklist 
● Client metadata 
● Initially domain name, mount point 
● Extension to other identifiers? 
52 OpenNebulaConf 2014 Berlin
Online diagnostics 
● Bugs exposed relate to failures of one client to release 
resources for another client: “my filesystem is frozen”. 
Introduce new health messages: 
● “client xyz is failing to respond to cache pressure” 
● “client xyz is ignoring capability release messages” 
● Add client metadata to allow us to give domain names 
instead of IP addrs in messages. 
● Opaque behavior in the face of dead clients. Introduce 
`session ls` 
● Which clients does MDS think are stale? 
● Identify clients to evict with `session evict` 
53 OpenNebulaConf 2014 Berlin
Journal resilience 
● Bad journal prevents MDS recovery: “my MDS crashes 
on startup”: 
● Data loss 
● Software bugs 
● Updated on-disk format to make recovery from 
damage easier 
● New tool: cephfs-journal-tool 
● Inspect the journal, search/filter 
● Chop out unwanted entries/regions 
54 OpenNebulaConf 2014 Berlin
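A hedged sketch of how the tool is typically used; always export a copy of the journal before modifying anything: 
cephfs-journal-tool journal export backup.bin        # save the raw journal first 
cephfs-journal-tool journal inspect                  # report damage without changing anything 
cephfs-journal-tool event recover_dentries summary   # salvage metadata entries from the journal 
cephfs-journal-tool journal reset                    # last resort: discard the damaged journal 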
Handling resource limits 
● Write a test, see what breaks! 
● Full MDS cache: 
● Require some free memory to make progress 
● Require client cooperation to unpin cache objects 
● Anticipate tuning required for cache behaviour: what 
should we evict? 
● Full OSD cluster 
● Require explicit handling to abort with -ENOSPC 
● MDS → RADOS flow control: 
● Contention between I/O to flush cache and I/O to journal 
55 OpenNebulaConf 2014 Berlin
Test, QA, bug fixes 
● The answer to “Is CephFS production ready?” 
● teuthology test framework: 
● Long running/thrashing test 
● Third party FS correctness tests 
● Python functional tests 
● We dogfood CephFS internally 
● Various kclient fixes discovered 
● Motivation for new health monitoring metrics 
● Third party testing is extremely valuable 
56 OpenNebulaConf 2014 Berlin
What's next? 
● You tell us! 
● Recent survey highlighted: 
● FSCK hardening 
● Multi-MDS hardening 
● Quota support 
● Which use cases will community test with? 
● General purpose 
● Backup 
● Hadoop 
57 OpenNebulaConf 2014 Berlin
Reporting bugs 
● Does the most recent development release or kernel 
fix your issue? 
● What is your configuration? MDS config, Ceph 
version, client version, kclient or fuse 
● What is your workload? 
● Can you reproduce with debug logging enabled? 
http://ceph.com/resources/mailing-list-irc/ 
http://tracker.ceph.com/projects/ceph/issues 
http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/ 
58 OpenNebulaConf 2014 Berlin
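A hedged example of turning up logging while reproducing a problem (daemon name and levels are illustrative; see the log-and-debug page above): 
ceph tell mds.a injectargs '--debug_mds 20 --debug_ms 1' 
ceph daemon mds.a config set debug_mds 20 
# for the fuse client, set 'debug client = 20' under [client] in ceph.conf before mounting 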
Future 
● Ceph Developer Summit: 
● When: 8 October 
● Where: online 
● Post-Hammer work: 
● Recent survey highlighted multi-MDS, quota support 
● Testing with clustered Samba/NFS? 
59 OpenNebulaConf 2014 Berlin
Ad

More Related Content

What's hot (20)

OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
NETWAYS
 
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red HatMultiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
OpenStack
 
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
NETWAYS
 
Building a Microsoft cloud with open technologies
Building a Microsoft cloud with open technologiesBuilding a Microsoft cloud with open technologies
Building a Microsoft cloud with open technologies
Alessandro Pilotti
 
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
eNovance
 
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Deepak Shetty
 
OpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula TechDay Boston 2015 - introduction and architectureOpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula Project
 
VietOpenStack meetup 7th High Performance VM
VietOpenStack meetup 7th High Performance VMVietOpenStack meetup 7th High Performance VM
VietOpenStack meetup 7th High Performance VM
Vietnam Open Infrastructure User Group
 
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
NETWAYS
 
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebulaOpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula Project
 
Open stack in action enovance-quantum in action
Open stack in action enovance-quantum in actionOpen stack in action enovance-quantum in action
Open stack in action enovance-quantum in action
eNovance
 
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
bipin kunal
 
John Spray - Ceph in Kubernetes
John Spray - Ceph in KubernetesJohn Spray - Ceph in Kubernetes
John Spray - Ceph in Kubernetes
ShapeBlue
 
Deploying openstack using ansible
Deploying openstack using ansibleDeploying openstack using ansible
Deploying openstack using ansible
openstackindia
 
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPONOpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebula Project
 
3 ubuntu open_stack_ceph
3 ubuntu open_stack_ceph3 ubuntu open_stack_ceph
3 ubuntu open_stack_ceph
openstackindia
 
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick HamonOpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
eNovance
 
Openstack Summit HK - Ceph defacto - eNovance
Openstack Summit HK - Ceph defacto - eNovanceOpenstack Summit HK - Ceph defacto - eNovance
Openstack Summit HK - Ceph defacto - eNovance
eNovance
 
OpenNebula - The Project
OpenNebula - The ProjectOpenNebula - The Project
OpenNebula - The Project
OpenNebula Project
 
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
NETWAYS
 
OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
OpenNebula Conf: 2014 | Lightning talk: Managing Docker Containers with OpenN...
NETWAYS
 
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red HatMultiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
Multiple Sites and Disaster Recovery with Ceph: Andrew Hatfield, Red Hat
OpenStack
 
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...
NETWAYS
 
Building a Microsoft cloud with open technologies
Building a Microsoft cloud with open technologiesBuilding a Microsoft cloud with open technologies
Building a Microsoft cloud with open technologies
Alessandro Pilotti
 
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
OpenStack in Action 4! Vincent Untz - Running multiple hypervisors in your Op...
eNovance
 
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Ceph & OpenStack talk given @ OpenStack Meetup @ Bangalore, June 2015
Deepak Shetty
 
OpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula TechDay Boston 2015 - introduction and architectureOpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula TechDay Boston 2015 - introduction and architecture
OpenNebula Project
 
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
OpenNebula Conf | Lightning talk: Managing a Scientific Computing Facility wi...
NETWAYS
 
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebulaOpenNebula TechDay Waterloo 2015 - Hyperconvergence  and  OpenNebula
OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula
OpenNebula Project
 
Open stack in action enovance-quantum in action
Open stack in action enovance-quantum in actionOpen stack in action enovance-quantum in action
Open stack in action enovance-quantum in action
eNovance
 
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
Cephfs - Red Hat Openstack and Ceph meetup, Pune 28th november 2015
bipin kunal
 
John Spray - Ceph in Kubernetes
John Spray - Ceph in KubernetesJohn Spray - Ceph in Kubernetes
John Spray - Ceph in Kubernetes
ShapeBlue
 
Deploying openstack using ansible
Deploying openstack using ansibleDeploying openstack using ansible
Deploying openstack using ansible
openstackindia
 
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPONOpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPON
OpenNebula Project
 
3 ubuntu open_stack_ceph
3 ubuntu open_stack_ceph3 ubuntu open_stack_ceph
3 ubuntu open_stack_ceph
openstackindia
 
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick HamonOpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
OpenStack in Action! 5 - Dell - OpenStack powered solutions - Patrick Hamon
eNovance
 
Openstack Summit HK - Ceph defacto - eNovance
Openstack Summit HK - Ceph defacto - eNovanceOpenstack Summit HK - Ceph defacto - eNovance
Openstack Summit HK - Ceph defacto - eNovance
eNovance
 
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
OpenNebula Conf 2014 | Building Hybrid Cloud Federated Environments with Open...
NETWAYS
 

Viewers also liked (12)

OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
NETWAYS
 
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
NETWAYS
 
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio LlorenteOpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
NETWAYS
 
OpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
OpenNebula Conf 2014 | Puppet and OpenNebula - David LutterkortOpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
OpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
NETWAYS
 
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph GaluschkaOpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
NETWAYS
 
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan HoracekOpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
NETWAYS
 
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
NETWAYS
 
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz DiazOpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
OpenNebula Conf 2014 | Lightning talk: A brief introduction to Cloud Catalyst...
NETWAYS
 
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
OpenNebula Conf 2014 | Deploying OpenNebula in a Snap using Configuration Man...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
OpenNebula Conf 2014 | Lightning talk: Proactive Autonomic Management Feature...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
OpenNebula Conf 2014 | Lightning talk: OpenNebula Puppet Module - Norman Mess...
NETWAYS
 
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio LlorenteOpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio Llorente
NETWAYS
 
OpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
OpenNebula Conf 2014 | Puppet and OpenNebula - David LutterkortOpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
OpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort
NETWAYS
 
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph GaluschkaOpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph Galuschka
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
OpenNebula Conf 2014 | Lightning talk: Cloud in a box - Megam by Varadarajan ...
NETWAYS
 
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...
NETWAYS
 
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan HoracekOpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
OpenNebula Conf 2014 | Lightning talk: OpenNebula at Etnetera by Jan Horacek
NETWAYS
 
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...
NETWAYS
 
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz DiazOpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
OpenNebula Conf 2014 | OpenNebula at Cenatic - Jose Angel Diaz Diaz
NETWAYS
 
Ad

Similar to OpenNebula Conf 2014 | Using Ceph to provide scalable storage for OpenNebula by John Spray (20)

Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development
Ceph Community
 
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red HatThe Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
OpenStack
 
OSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage SystemOSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage System
NETWAYS
 
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
Ceph Community
 
DevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform SimulationsDevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform Simulations
Jeremy Eder
 
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise KubernetesMongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Sean Cohen
 
Red Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) OverviewRed Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) Overview
Marcel Hergaarden
 
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
TomBarron
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red_Hat_Storage
 
Red Hat Storage Roadmap
Red Hat Storage RoadmapRed Hat Storage Roadmap
Red Hat Storage Roadmap
Colleen Corrice
 
Red Hat Storage Roadmap
Red Hat Storage RoadmapRed Hat Storage Roadmap
Red Hat Storage Roadmap
Red_Hat_Storage
 
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
PutraChandra7
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Community
 
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdfOpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
ssuser9e06a61
 
Ceph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage WeilCeph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage Weil
Ceph Community
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installation
Robert Bohne
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetup
ktdreyer
 
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 DemoNFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
ManageIQ
 
Red hat ceph storage customer presentation
Red hat ceph storage customer presentationRed hat ceph storage customer presentation
Red hat ceph storage customer presentation
Rodrigo Missiaggia
 
Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development
Ceph Community
 
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red HatThe Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
OpenStack
 
OSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage SystemOSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage System
NETWAYS
 
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
Ceph Community
 
DevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform SimulationsDevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform Simulations
Jeremy Eder
 
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise KubernetesMongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Sean Cohen
 
Red Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) OverviewRed Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) Overview
Marcel Hergaarden
 
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
TomBarron
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red_Hat_Storage
 
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
High%20Level%20-%20OpenShift%204%20Technical%20Deep%20Dive%20-%202024%20-%20I...
PutraChandra7
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Community
 
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdfOpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
ssuser9e06a61
 
Ceph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage WeilCeph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage Weil
Ceph Community
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installation
Robert Bohne
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetup
ktdreyer
 
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 DemoNFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
ManageIQ
 
Red hat ceph storage customer presentation
Red hat ceph storage customer presentationRed hat ceph storage customer presentation
Red hat ceph storage customer presentation
Rodrigo Missiaggia
 
Ad

Recently uploaded (20)

Designing AI-Powered APIs on Azure: Best Practices& Considerations
Designing AI-Powered APIs on Azure: Best Practices& ConsiderationsDesigning AI-Powered APIs on Azure: Best Practices& Considerations
Designing AI-Powered APIs on Azure: Best Practices& Considerations
Dinusha Kumarasiri
 
Download YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full ActivatedDownload YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full Activated
saniamalik72555
 
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
Andre Hora
 
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Andre Hora
 
Get & Download Wondershare Filmora Crack Latest [2025]
Get & Download Wondershare Filmora Crack Latest [2025]Get & Download Wondershare Filmora Crack Latest [2025]
Get & Download Wondershare Filmora Crack Latest [2025]
saniaaftab72555
 
Revolutionizing Residential Wi-Fi PPT.pptx
Revolutionizing Residential Wi-Fi PPT.pptxRevolutionizing Residential Wi-Fi PPT.pptx
Revolutionizing Residential Wi-Fi PPT.pptx
nidhisingh691197
 
Top 10 Client Portal Software Solutions for 2025.docx
Top 10 Client Portal Software Solutions for 2025.docxTop 10 Client Portal Software Solutions for 2025.docx
Top 10 Client Portal Software Solutions for 2025.docx
Portli
 
WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)
sh607827
 
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRYLEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
NidaFarooq10
 
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage DashboardsAdobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
BradBedford3
 
Not So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java WebinarNot So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java Webinar
Tier1 app
 
EASEUS Partition Master Crack + License Code
EASEUS Partition Master Crack + License CodeEASEUS Partition Master Crack + License Code
EASEUS Partition Master Crack + License Code
aneelaramzan63
 
How can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptxHow can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptx
laravinson24
 
Exploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the FutureExploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the Future
ICS
 
The Significance of Hardware in Information Systems.pdf
The Significance of Hardware in Information Systems.pdfThe Significance of Hardware in Information Systems.pdf
The Significance of Hardware in Information Systems.pdf
drewplanas10
 
Adobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest VersionAdobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest Version
kashifyounis067
 
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Lionel Briand
 
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Dele Amefo
 
Maxon CINEMA 4D 2025 Crack FREE Download LINK
Maxon CINEMA 4D 2025 Crack FREE Download LINKMaxon CINEMA 4D 2025 Crack FREE Download LINK
Maxon CINEMA 4D 2025 Crack FREE Download LINK
younisnoman75
 
Landscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature ReviewLandscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature Review
Hironori Washizaki
 
Designing AI-Powered APIs on Azure: Best Practices& Considerations
Designing AI-Powered APIs on Azure: Best Practices& ConsiderationsDesigning AI-Powered APIs on Azure: Best Practices& Considerations
Designing AI-Powered APIs on Azure: Best Practices& Considerations
Dinusha Kumarasiri
 
Download YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full ActivatedDownload YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full Activated
saniamalik72555
 
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...
Andre Hora
 
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Exceptional Behaviors: How Frequently Are They Tested? (AST 2025)
Andre Hora
 
Get & Download Wondershare Filmora Crack Latest [2025]
Get & Download Wondershare Filmora Crack Latest [2025]Get & Download Wondershare Filmora Crack Latest [2025]
Get & Download Wondershare Filmora Crack Latest [2025]
saniaaftab72555
 
Revolutionizing Residential Wi-Fi PPT.pptx
Revolutionizing Residential Wi-Fi PPT.pptxRevolutionizing Residential Wi-Fi PPT.pptx
Revolutionizing Residential Wi-Fi PPT.pptx
nidhisingh691197
 
Top 10 Client Portal Software Solutions for 2025.docx
Top 10 Client Portal Software Solutions for 2025.docxTop 10 Client Portal Software Solutions for 2025.docx
Top 10 Client Portal Software Solutions for 2025.docx
Portli
 
WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)WinRAR Crack for Windows (100% Working 2025)
WinRAR Crack for Windows (100% Working 2025)
sh607827
 
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRYLEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
NidaFarooq10
 
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage DashboardsAdobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
Adobe Marketo Engage Champion Deep Dive - SFDC CRM Synch V2 & Usage Dashboards
BradBedford3
 
Not So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java WebinarNot So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java Webinar
Tier1 app
 
EASEUS Partition Master Crack + License Code
EASEUS Partition Master Crack + License CodeEASEUS Partition Master Crack + License Code
EASEUS Partition Master Crack + License Code
aneelaramzan63
 
How can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptxHow can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptx
laravinson24
 
Exploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the FutureExploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the Future
ICS
 
The Significance of Hardware in Information Systems.pdf
The Significance of Hardware in Information Systems.pdfThe Significance of Hardware in Information Systems.pdf
The Significance of Hardware in Information Systems.pdf
drewplanas10
 
Adobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest VersionAdobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest Version
kashifyounis067
 
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Lionel Briand
 
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Salesforce Data Cloud- Hyperscale data platform, built for Salesforce.
Dele Amefo
 
Maxon CINEMA 4D 2025 Crack FREE Download LINK
Maxon CINEMA 4D 2025 Crack FREE Download LINKMaxon CINEMA 4D 2025 Crack FREE Download LINK
Maxon CINEMA 4D 2025 Crack FREE Download LINK
younisnoman75
 
Landscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature ReviewLandscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature Review
Hironori Washizaki
 

OpenNebula Conf 2014 | Using Ceph to provide scalable storage for OpenNebula by John Spray

  • 1. Using Ceph with OpenNebula John Spray [email protected]
  • 2. Agenda ● What is it? ● Architecture ● Integration with OpenNebula ● What's new? 2 OpenNebulaConf 2014 Berlin
  • 3. What is Ceph? 3 OpenNebulaConf 2014 Berlin
  • 4. What is Ceph? ● Highly available resilient data store ● Free Software (LGPL) ● 10 years since inception ● Flexible object, block and filesystem interfaces ● Especially popular in private clouds as VM image service, and S3-compatible object storage service. 4 OpenNebulaConf 2014 Berlin
  • 5. Interfaces to storage S3 & Swift Multi-tenant Snapshots Clones 5 OpenNebulaConf 2014 Berlin FILE SYSTEM CephFS BLOCK STORAGE RBD OBJECT STORAGE RGW Keystone Geo-Replication Native API OpenStack Linux Kernel iSCSI POSIX Linux Kernel CIFS/NFS HDFS Distributed Metadata
  • 6. Ceph Architecture 6 OpenNebulaConf 2014 Berlin
  • 7. Architectural Components APP HOST/VM CLIENT RGW A web services gateway for object storage, compatible with S3 and Swift RBD A reliable, fully-distributed block device with cloud platform integration LIBRADOS A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) RADOS A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 7 OpenNebulaConf 2014 Berlin CEPHFS A distributed file system with POSIX semantics and scale-out metadata management
  • 8. Object Storage Daemons OSD FS DISK OSD FS DISK OSD FS DISK OSD FS DISK btrfs xfs ext4 8 OpenNebulaConf 2014 Berlin M M M
  • 9. RADOS Components OSDs:  10s to 10000s in a cluster  One per disk (or one per SSD, RAID group…)  Serve stored objects to clients  Intelligently peer for replication & recovery Monitors:  Maintain cluster membership and state  Provide consensus for distributed decision-making  Small, odd number  These do not serve stored objects to clients M 9 OpenNebulaConf 2014 Berlin
  • 10. Rados Cluster APPLICATION M M M M M RADOS CLUSTER 10 OpenNebulaConf 2014 Berlin
  • 11. Where do objects live? ?? APPLICATION 11 OpenNebulaConf 2014 Berlin M M M OBJECT
  • 12. A Metadata Server? 1 APPLICATION 12 OpenNebulaConf 2014 Berlin M M M 2
  • 13. Calculated placement APPLICATION F 13 OpenNebulaConf 2014 Berlin M M M A-G H-N O-T U-Z
  • 14. Even better: CRUSH 14 OpenNebulaConf 2014 Berlin 01 11 11 01 RADOS CLUSTER OBJECT 10 01 01 10 10 01 11 01 10 01 01 10 10 01 01 10 10 10 01 01
  • 15. CRUSH is a quick calculation 15 OpenNebulaConf 2014 Berlin 01 11 11 01 RADOS CLUSTER OBJECT 10 01 01 10 10 01 01 10 10 10 01 01
  • 16. CRUSH: Dynamic data placement CRUSH:  Pseudo-random placement algorithm  Fast calculation, no lookup  Repeatable, deterministic  Statistically uniform distribution  Stable mapping  Limited data migration on change  Rule-based configuration  Infrastructure topology aware  Adjustable replication  Weighting 16 OpenNebulaConf 2014 Berlin
  • 17. Architectural Components APP HOST/VM CLIENT RGW A web services gateway for object storage, compatible with S3 and Swift RBD A reliable, fully-distributed block device with cloud platform integration LIBRADOS A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) RADOS A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 17 OpenNebulaConf 2014 Berlin CEPHFS A distributed file system with POSIX semantics and scale-out metadata management
  • 18. RBD: Virtual disks in Ceph 18 OpenNebulaConf 2014 Berlin 18 RADOS BLOCK DEVICE:  Storage of disk images in RADOS  Decouples VMs from host  Images are striped across the cluster (pool)  Snapshots  Copy-on-write clones  Support in:  Mainline Linux Kernel (2.6.39+)  Qemu/KVM  OpenStack, CloudStack, OpenNebula, Proxmox
  • 19. Storing virtual disks VM HYPERVISOR LIBRBD M M RADOS CLUSTER 19 19 OpenNebulaConf 2014 Berlin
  • 20. Using Ceph with OpenNebula 20 OpenNebulaConf 2014 Berlin
  • 21. Storage in OpenNebula deployments OpenNebula Cloud Architecture Survey 2014 (http://c12g.com/resources/survey/) 21 OpenNebulaConf 2014 Berlin
  • 22. RBD and libvirt/qemu ● librbd (user space) client integration with libvirt/qemu ● Support for live migration, thin clones ● Get recent versions! ● Directly supported in OpenNebula since 4.0 with the Ceph Datastore (wraps `rbd` CLI) More info online: http://ceph.com/docs/master/rbd/libvirt/ http://docs.opennebula.org/4.10/administration/storage/ceph_ds.html 22 OpenNebulaConf 2014 Berlin
  • 23. Other hypervisors ● OpenNebula is flexible, so can we also use Ceph with non-libvirt/qemu hypervisors? ● Kernel RBD: can present RBD images in /dev/ on hypervisor host for software unaware of librbd ● Docker: can exploit RBD volumes with a local filesystem for use as data volumes – maybe CephFS in future...? ● For unsupported hypervisors, can adapt to Ceph using e.g. iSCSI for RBD, or NFS for CephFS (but test re-exports carefully!) 23 OpenNebulaConf 2014 Berlin
  • 24. Choosing hardware Testing/benchmarking/expert advice is needed, but there are general guidelines: ● Prefer many cheap nodes to few expensive nodes (10 is better than 3) ● Include small but fast SSDs for OSD journals ● Don't simply buy biggest drives: consider IOPs/capacity ratio ● Provision network and IO capacity sufficient for your workload plus recovery bandwidth from node failure. 24 OpenNebulaConf 2014 Berlin
  • 25. What's new? 25 OpenNebulaConf 2014 Berlin
  • 26. Ceph releases ● Ceph 0.80 firefly (May 2014) – Cache tiering & erasure coding – Key/val OSD backends – OSD primary affinity ● Ceph 0.87 giant (October 2014) – RBD cache enabled by default – Performance improvements – Locally recoverable erasure codes ● Ceph x.xx hammer (2015) 26 OpenNebulaConf 2014 Berlin
  • 27. Additional components ● Ceph FS – scale-out POSIX filesystem service, currently being stabilized ● Calamari – monitoring dashboard for Ceph ● ceph-deploy – easy SSH-based deployment tool ● Puppet, Chef modules 27 OpenNebulaConf 2014 Berlin
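As an example of the ceph-deploy workflow of this generation (hostnames and disks are placeholders; exact syntax varies between ceph-deploy versions):
    ceph-deploy new mon1                         # write an initial cluster config
    ceph-deploy install mon1 osd1 osd2 osd3      # install packages over SSH
    ceph-deploy mon create-initial               # bootstrap the monitor(s)
    ceph-deploy osd create osd1:sdb osd2:sdb osd3:sdb   # prepare and activate OSDs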
  • 28. Get involved Evaluate the latest releases: http://ceph.com/resources/downloads/ Mailing list, IRC: http://ceph.com/resources/mailing-list-irc/ Bugs: http://tracker.ceph.com/projects/ceph/issues Online developer summits: https://wiki.ceph.com/Planning/CDS 28 OpenNebulaConf 2014 Berlin
  • 31. Spare slides 31 OpenNebulaConf 2014 Berlin
  • 33. Ceph FS 33 OpenNebulaConf 2014 Berlin
  • 34. CephFS architecture ● Dynamically balanced scale-out metadata ● Inherit flexibility/scalability of RADOS for data ● POSIX compatibility ● Beyond POSIX: Subtree snapshots, recursive statistics Weil, Sage A., et al. "Ceph: A scalable, high-performance distributed file system." Proceedings of the 7th symposium on Operating systems design and implementation. USENIX Association, 2006. http://ceph.com/papers/weil-ceph-osdi06.pdf 34 OpenNebulaConf 2014 Berlin
  • 35. Components ● Client: kernel, fuse, libcephfs ● Server: MDS daemon ● Storage: RADOS cluster (mons & OSDs) 35 OpenNebulaConf 2014 Berlin
  • 36. Components 36 OpenNebulaConf 2014 Berlin (diagram: a Linux host with the ceph.ko client sending metadata and data to the Ceph server daemons)
  • 37. From application to disk 37 OpenNebulaConf 2014 Berlin (stack: Application → ceph-fuse / libcephfs / kernel client → client network protocol → ceph-mds and RADOS → Disk)
  • 38. Scaling out FS metadata ● Options for distributing metadata? – by static subvolume – by path hash – by dynamic subtree ● Consider performance, ease of implementation 38 OpenNebulaConf 2014 Berlin
  • 39. Dynamic subtree placement 39 OpenNebulaConf 2014 Berlin
  • 40. Dynamic subtree placement ● Locality: get the dentries in a dir from one MDS ● Support read-heavy workloads by replicating non-authoritative copies (cached with capabilities, just like clients do) ● In practice, the MDS works at the directory-fragment level in order to handle large directories 40 OpenNebulaConf 2014 Berlin
  • 41. Data placement ● Stripe file contents across RADOS objects ● clients get the full bandwidth of the RADOS cluster ● fairly tolerant of object losses: reads of lost objects return zeros ● Control striping with layout vxattrs ● layouts also select between multiple data pools ● Deletion is a special case: client deletions mark files 'stray', and the MDS issues the RADOS delete ops 41 OpenNebulaConf 2014 Berlin
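The layout vxattrs mentioned above are read and written with ordinary xattr tools; a sketch, with illustrative paths and an assumed extra data pool named fs_data_ssd that has already been added to the filesystem:
    getfattr -n ceph.file.layout /mnt/ceph/somefile                # show a file's striping layout
    setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/ceph/dir    # new files in dir stripe over 4 objects
    setfattr -n ceph.dir.layout.pool -v fs_data_ssd /mnt/ceph/dir  # route new files to another data pool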
  • 42. Clients ● Two implementations: ● ceph-fuse/libcephfs ● kclient ● Interplay with the VFS page cache; efficiency is harder with FUSE (extraneous stats etc.) ● Client performance matters for single-client workloads ● A slow client can hold up others if it's hogging metadata locks: include clients in troubleshooting 42 OpenNebulaConf 2014 Berlin
  • 43. Journaling and caching in MDS ● Metadata ops initially journaled to striped journal "file" in the metadata pool. – I/O latency on metadata ops is sum of network latency and journal commit latency. – Metadata remains pinned in in-memory cache until expired from journal. 43 OpenNebulaConf 2014 Berlin
  • 44. Journaling and caching in MDS ● In some workloads we expect almost all metadata to stay in cache; in others it's more of a stream. ● Control cache size with mds_cache_size ● Cache eviction relies on client cooperation ● MDS journal replay not only recovers data but also warms up the cache. Use standby-replay to keep that cache warm. 44 OpenNebulaConf 2014 Berlin
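A sketch of the corresponding ceph.conf settings for this era of Ceph; the values are illustrative, not recommendations:
    [mds]
        mds cache size = 300000        # inodes to keep in the metadata cache (default 100000)
        mds standby replay = true      # standby MDS tails the active journal, keeping its cache warm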
  • 45. Lookup by inode ● Sometimes we need inode → path mapping: ● Hard links ● NFS handles ● Costly to store this: mitigate by piggybacking paths (backtraces) onto data objects ● Con: storing metadata to data pool ● Con: extra IOs to set backtraces ● Pro: disaster recovery from data pool ● Future: improve backtrace writing latency 45 OpenNebulaConf 2014 Berlin
  • 46. CephFS in practice
    ceph-deploy mds create myserver
    ceph osd pool create fs_data 64
    ceph osd pool create fs_metadata 64
    ceph fs new myfs fs_metadata fs_data
    mount -t ceph x.x.x.x:6789:/ /mnt/ceph
    46 OpenNebulaConf 2014 Berlin
  • 47. Managing CephFS clients ● New in giant: see hostnames of connected clients ● Client eviction is sometimes important: ● Skip the wait during reconnect phase on MDS restart ● Allow others to access files locked by crashed client ● Use OpTracker to inspect ongoing operations 47 OpenNebulaConf 2014 Berlin
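These operations are exposed through the MDS admin socket; roughly as below, where the daemon name and session id are placeholders:
    ceph daemon mds.myserver session ls             # list connected clients, incl. hostnames (giant+)
    ceph daemon mds.myserver session evict 4305     # evict a client by session id
    ceph daemon mds.myserver dump_ops_in_flight     # OpTracker view of ongoing operations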
  • 48. CephFS tips ● Choose MDS servers with lots of RAM ● Investigate clients when diagnosing stuck/slow access ● Use recent Ceph and recent kernel ● Use a conservative configuration: ● Single active MDS, plus one standby ● Dedicated MDS server ● Kernel client ● No snapshots, no inline data 48 OpenNebulaConf 2014 Berlin
  • 49. Towards a production-ready CephFS ● Focus on resilience: 1. Don't corrupt things 2. Stay up 3. Handle the corner cases 4. When something is wrong, tell me 5. Provide the tools to diagnose and fix problems ● Achieve this first within a conservative single-MDS configuration 49 OpenNebulaConf 2014 Berlin
  • 50. Giant->Hammer timeframe ● Initial online fsck (a.k.a. forward scrub) ● Online diagnostics (`session ls`, MDS health alerts) ● Journal resilience & tools (cephfs-journal-tool) ● flock in the FUSE client ● Initial soft quota support ● General resilience: full OSDs, full metadata cache 50 OpenNebulaConf 2014 Berlin
  • 51. FSCK and repair ● Recover from damage: ● Loss of data objects (which files are damaged?) ● Loss of metadata objects (what subtree is damaged?) ● Continuous verification: ● Are recursive stats consistent? ● Does metadata on disk match cache? ● Does file size metadata match data on disk? ● Repair: ● Automatic where possible ● Manual tools to enable support 51 OpenNebulaConf 2014 Berlin
  • 52. Client management ● Current eviction is not 100% safe against rogue clients ● Update to client protocol to wait for OSD blacklist ● Client metadata ● Initially domain name, mount point ● Extension to other identifiers? 52 OpenNebulaConf 2014 Berlin
  • 53. Online diagnostics ● Bugs exposed relate to failures of one client to release resources for another client: “my filesystem is frozen”. Introduce new health messages: ● “client xyz is failing to respond to cache pressure” ● “client xyz is ignoring capability release messages” ● Add client metadata to allow us to give domain names instead of IP addrs in messages. ● Opaque behavior in the face of dead clients. Introduce `session ls` ● Which clients does MDS think are stale? ● Identify clients to evict with `session evict` 53 OpenNebulaConf 2014 Berlin
  • 54. Journal resilience ● Bad journal prevents MDS recovery: “my MDS crashes on startup”: ● Data loss ● Software bugs ● Updated on-disk format to make recovery from damage easier ● New tool: cephfs-journal-tool ● Inspect the journal, search/filter ● Chop out unwanted entries/regions 54 OpenNebulaConf 2014 Berlin
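cephfs-journal-tool usage looks roughly like this (a sketch only; always export a backup first, and use the tool's filter options to narrow what gets spliced):
    cephfs-journal-tool journal inspect             # check journal integrity
    cephfs-journal-tool journal export backup.bin   # save a copy before any surgery
    cephfs-journal-tool event get list              # inspect/search journal events
    cephfs-journal-tool event splice summary        # chop out damaged entries (apply filters first)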
  • 55. Handling resource limits ● Write a test, see what breaks! ● Full MDS cache: ● Require some free memory to make progress ● Require client cooperation to unpin cache objects ● Anticipate tuning required for cache behaviour: what should we evict? ● Full OSD cluster ● Require explicit handling to abort with -ENOSPC ● MDS → RADOS flow control: ● Contention between I/O to flush cache and I/O to journal 55 OpenNebulaConf 2014 Berlin
  • 56. Test, QA, bug fixes ● The answer to “Is CephFS production ready?” ● teuthology test framework: ● Long running/thrashing test ● Third party FS correctness tests ● Python functional tests ● We dogfood CephFS internally ● Various kclient fixes discovered ● Motivation for new health monitoring metrics ● Third party testing is extremely valuable 56 OpenNebulaConf 2014 Berlin
  • 57. What's next? ● You tell us! ● Recent survey highlighted: ● FSCK hardening ● Multi-MDS hardening ● Quota support ● Which use cases will community test with? ● General purpose ● Backup ● Hadoop 57 OpenNebulaConf 2014 Berlin
  • 58. Reporting bugs ● Does the most recent development release or kernel fix your issue? ● What is your configuration? MDS config, Ceph version, client version, kclient or fuse ● What is your workload? ● Can you reproduce with debug logging enabled? http://ceph.com/resources/mailing-list-irc/ http://tracker.ceph.com/projects/ceph/issues http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/ 58 OpenNebulaConf 2014 Berlin
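Debug logging can be raised at runtime as described in the log-and-debug link above; for example (daemon name and log levels are illustrative):
    ceph tell mds.myserver injectargs '--debug_mds 20 --debug_ms 1'
    # or persistently in ceph.conf:
    [mds]
        debug mds = 20
        debug ms = 1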
  • 59. Future ● Ceph Developer Summit: ● When: 8 October ● Where: online ● Post-Hammer work: ● Recent survey highlighted multi-MDS, quota support ● Testing with clustered Samba/NFS? 59 OpenNebulaConf 2014 Berlin