Talk that showcases the advantages of using Ceph as the storage of choice in OpenStack. It shows how Ceph integrates with all OpenStack storage services and the benefits of using Ceph as __the__ unified storage solution for OpenStack.
How to Survive an OpenStack Cloud Meltdown with Ceph - Sean Cohen
What if you lost your datacenter completely in a catastrophe, but your users hardly noticed? Sounds like a mirage, but it’s absolutely possible.
This talk will showcase OpenStack features enabling multisite and disaster recovery functionality. We'll present the latest capabilities of OpenStack and Ceph for Volume and Image replication using Ceph Block and Object as the backend storage solution, and look at the future developments they are driving to improve and simplify the relevant architecture use cases. One such case is Distributed NFV, an emerging use case that rationalizes your IT by using fewer control planes and lets you spread your VNFs across multiple datacenters and edge deployments.
In this session you will learn about new OpenStack features enabling multisite and distributed deployments, and review key use cases, architecture designs, and best practices to help operators avoid the OpenStack cloud meltdown nightmare.
https://youtu.be/n2S7uNC_KMw
https://goo.gl/cRNGBK
When disaster strikes the cloud: Who, what, when, where and how to recover - Sean Cohen
Enterprise applications need to be able to survive large-scale disasters. While some born-on-the-cloud applications have built-in disaster recovery functionality, non-born-on-the-cloud enterprise applications typically expect the infrastructure to provide disaster recovery support. OpenStack provides various building blocks that enable an OpenStack application to survive a disaster; these building blocks are being improved in Juno and Kilo. Some of these building blocks need to be enabled by the OpenStack cloud administrator and others need to be leveraged by the application deployer. In this presentation, we will review basic disaster recovery concepts, covering when, where, and what is done at each stage of the application cloud life cycle. We will describe the existing building blocks and explain the roles of the cloud administrator and the cloud end user in enabling OpenStack applications to survive a disaster. We will then detail new features in Juno, and coming in Kilo, that will help enhance OpenStack's disaster recovery support. We will conclude by detailing the remaining gaps and present some tools that address these gaps, allowing an application to survive a disaster when running on an OpenStack cloud.
OpenStack Summit Session: https://youtu.be/Dj5sELG9keE
Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph - Sean Cohen
IT organizations require a disaster recovery strategy addressing outages with loss of storage, or extended loss of availability, at the primary site. Applications need to rapidly migrate to the secondary site and transition with little or no impact to their availability. This talk will cover the various architectural options and levels of maturity in OpenStack services for building multi-site configurations using the Mitaka release. We'll present the latest capabilities for Volume, Image and Object Storage with Ceph as the backend storage solution, and look at the future developments the OpenStack and Ceph communities are driving to improve and simplify the relevant use cases.
Slides from OpenStack Austin Summit 2016 session: http://alturl.com/hpesz
Peanut Butter and Jelly: Mapping the Deep Integration between Ceph and OpenStack - Sean Cohen
Ceph is the most widely deployed storage technology used with OpenStack, most often because it's an open source, massively scalable, unified software-defined storage solution. Its popularity is also due to its unique and optimized technical integration with the OpenStack services and its pure-software approach to scaling. In this session, we'll review how Ceph is integrated into Nova, Glance, Keystone, Cinder, and Manila and demonstrate why using traditional storage products won't give you the full benefits of an elastic cloud infrastructure. We'll also cover the flexible deployment options, available through Red Hat Enterprise Linux OpenStack Platform and Red Hat Ceph Storage, for seamless operations and key scenarios like disaster recovery. We'll discuss architectural options for deploying a multisite OpenStack cluster and cover the varying levels of maturity in the OpenStack services for configuring multisite. This session will also show how other technologies, such as Intel SSDs, are used with Ceph and OpenStack to increase performance and reduce power consumption. This will include reference architectures and best practices for Ceph and SSDs.
OpenStack Summit HK - Ceph de facto - eNovance
Sébastien Han presented on Ceph and its integration with OpenStack. Ceph is an open source distributed storage system that is well-suited for OpenStack deployments due to its self-managing capabilities and ability to scale storage resources easily. The integration between Ceph and OpenStack has improved significantly in recent OpenStack releases like Havana, with features like Cinder backup to Ceph and the ability to boot Nova instances using RBD images. Further integration work is planned for upcoming releases to fully leverage Ceph's capabilities.
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring - Jason Dillaman
This document introduces Ceph's multi-site mirroring feature for disaster recovery of RBD block storage. It uses a journal-based approach to log all image modifications and asynchronously replicate the journal across data centers. The rbd-mirror daemon is responsible for replaying the journal on remote clusters to achieve consistency. This allows RBD workloads to seamlessly failover and failback between sites, providing protection against data center failures.
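As a hedged illustration of the prerequisite side of this (not code from the talk): journal-based mirroring only applies to images that have the exclusive-lock and journaling features enabled. The sketch below creates such an image through the python-rbd bindings; the pool and image names are made up, and enabling pool-level mirroring plus running the rbd-mirror daemon on the remote cluster are separate steps not shown.

```python
# Minimal sketch, assuming a reachable cluster, a pool named "volumes", and the
# python-rados / python-rbd bindings. Creates an image with the features that
# journal-based RBD mirroring relies on.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')          # assumed pool name
    try:
        features = (rbd.RBD_FEATURE_LAYERING |
                    rbd.RBD_FEATURE_EXCLUSIVE_LOCK |   # journaling requires exclusive-lock
                    rbd.RBD_FEATURE_JOURNALING)
        rbd.RBD().create(ioctx, 'mirrored-image', 10 * 1024**3,
                         old_format=False, features=features)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```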
Containers package code and runtime dependencies to offer greater portability to cloud-native applications. But containers are ephemeral by design. If containers fail, stateful applications lose all of their data, leaving your enterprise open to the risk of lost revenue and lower customer satisfaction.
As you consider deploying containers in production, you'll need enterprise-calibre persistent storage that's scalable, secure, and container-aware.
Red Hat is the ideal provider for versatile, multi-purpose storage for containerized applications. Red Hat offers storage for containers, letting you attach modern software-defined storage to container platforms or bridge to traditional storage. In addition, Red Hat offers storage in containers, orchestrated by Kubernetes, delivering storage services and applications out of the same containers.
Container native storage reaches a new level of storage capabilities on the OpenShift Container Platform. Container-native storage can now be used for all the key infrastructure pieces of OpenShift: the registry, logging, and metrics services.
Ceph Performance and Optimization - Ceph Day Frankfurt - Ceph Community
This document summarizes a presentation about Ceph performance given at CephDays Frankfurt 2014. It discusses the good aspects of Ceph's performance including its deterministic object placement strategy and ability to aggregate IOs at the cluster and OSD levels. It also examines the bad, including issues caused by journaling, having the journal and OSD data on the same disk, filesystem fragmentation over time, lack of parallelized reads, and the impact of scrubbing. The ugly is described as multiple objects mapping to the same physical disks, causing sequential streams to get mixed and the disks to seek frequently. The document concludes with suggestions on how to properly build a Ceph cluster and considerations like hardware, data growth planning, and failure tolerance.
2015 Open Storage Workshop: Ceph Software-Defined Storage - Andrew Underwood
The document provides an overview of Ceph software-defined storage. It begins with an agenda for an Open Storage Workshop and discusses how the storage market is changing and the limitations of current storage technologies. It then introduces Ceph, describing its architecture including RADOS, CephFS, RBD and RGW. Key benefits of Ceph are scalability, low cost, resilience and extensibility. The document concludes with a case study of Australian research universities using Ceph with OpenStack and next steps to building a scalable storage solution.
Agenda:
What is Software Defined Storage?
What is Ceph?
What is Rook?
Storage for Kubernetes
Storage Classes
Storage on Kubernetes
Operator Pattern
Custom Resource Definition
Rook Operator
Rook architecture
Ceph on Kubernetes with Rook
Demo
Rook Framework for Storage solutions
How to Get Involved?
GlusterFS can be used with Kubernetes in several ways:
1) As a volume driver to provide persistent storage and shared access to data across containers using existing GlusterFS volumes.
2) Through local volumes which use hostPath provisioning to leverage GlusterFS mounts but are not suitable for production.
3) With Heketi which provides dynamic provisioning of GlusterFS volumes through a REST API and integration with Kubernetes.
4) Potentially through Rook which aims to integrate storage services like Ceph and GlusterFS to provide turnkey storage and currently supports Ceph.
New Ceph capabilities and Reference Architectures - Kamesh Pemmaraju
Have you heard about Inktank Ceph and are interested in learning some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you!
In this two-part session you will learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise such as a new erasure coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Using Rook to Manage Kubernetes Storage with Ceph - CloudOps2005
Moh Ahmed and Raymond Maika presented 'Using Rook to Manage Kubernetes Storage with Ceph' at Montreal's first Cloud Native Day, which took place on June 11 in Montreal.
OpenStack and Ceph case study at the University of Alabama - Kamesh Pemmaraju
The University of Alabama at Birmingham gives scientists and researchers a massive, on-demand, virtual storage cloud using OpenStack and Ceph for less than $0.41 per gigabyte. This is a session at the OpenStack Summit given by Kamesh Pemmaraju of Dell and John Paul of the University of Alabama. It details how the university IT staff deployed a private storage cloud infrastructure using the Dell OpenStack cloud solution with Dell servers, storage, networking and OpenStack, and Inktank Ceph. After assessing a number of traditional storage scenarios, the University partnered with Dell and Inktank to architect a centralized cloud storage platform that was capable of scaling seamlessly and rapidly, was cost-effective, and that could leverage a single hardware infrastructure for the OpenStack compute and storage environment.
This document outlines a course on Ceph storage. It covers Ceph history and components. Key components discussed include Object Storage Daemons (OSDs) that store data, Monitors that maintain the cluster map and provide consensus, and the Ceph journal. Other topics are the Ceph Gateway for object storage access, Ceph Block Device (RBD) for block storage, and CephFS for file storage. The goal is to understand Ceph concepts and deploy a Ceph cluster.
Quick-and-Easy Deployment of a Ceph Storage Cluster - Patrick Quairoli
Quick & Easy Deployment of a Ceph Storage Cluster with SUSE Enterprise Storage
The document discusses deploying a Ceph storage cluster using SUSE Enterprise Storage. It begins with an introduction to Ceph and how it works as a distributed object storage system. It then covers designing Ceph clusters based on workload needs and measuring performance. The document concludes with step-by-step instructions for deploying a basic three node Ceph cluster with monitoring using SUSE Enterprise Storage.
TUT18972: Unleash the power of Ceph across the Data Center - Ettore Simone
From SUSECon 2015: Smooth integration of emerging software-defined storage technologies into the traditional data center, using Fibre Channel and iSCSI as keys to success.
Storage tiering and erasure coding in Ceph (SCaLE13x) - Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
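To make the replication-versus-erasure-coding trade-off concrete, here is a toy sketch of the underlying idea (k=2 data chunks plus m=1 XOR parity chunk), not Ceph's actual erasure-code plugins: any single lost chunk can be rebuilt from the other two, at 1.5x storage overhead instead of the 3x of triple replication.

```python
# Toy illustration of erasure coding (k=2 data chunks, m=1 XOR parity chunk).
# This is NOT Ceph's erasure-code implementation, just the underlying idea.
def encode(data: bytes):
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b'\0')     # pad second chunk
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def reconstruct(a, b, parity):
    # Any single missing chunk (None) can be rebuilt from the other two.
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a, b

a, b, p = encode(b"hello world!")
a2, b2 = reconstruct(None, b, p)           # simulate losing chunk "a"
assert a2 == a
print((a2 + b2).rstrip(b'\0'))             # b'hello world!'
```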
This document discusses using GlusterFS storage in Kubernetes. It begins with an overview of GlusterFS as a scale-out distributed file system and its interfaces. It then covers Kubernetes storage concepts like StorageClasses, PersistentVolumeClaims (PVC), and PersistentVolumes (PV). It explains that StorageClasses define storage, PVC requests storage and creates a PV, and the PV provides actual mounted storage. It also demonstrates these concepts and shows the workflow of dynamically provisioning GlusterFS volumes in Kubernetes.
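As a hedged sketch of that dynamic provisioning workflow (not taken from the deck), the snippet below uses the official Kubernetes Python client to create a PVC against an assumed GlusterFS-backed StorageClass; the class name, claim name, and namespace are placeholders.

```python
# Hedged sketch of the PVC-based dynamic provisioning workflow described above,
# using the official Kubernetes Python client. The StorageClass name
# "glusterfs-storage" and the namespace are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()                        # or config.load_incluster_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],          # shared access across pods
        storage_class_name="glusterfs-storage",  # assumed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# The provisioner behind the StorageClass (e.g. Heketi) creates a matching PV
# and binds it to the claim; pods then mount the claim by name.
```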
The document discusses strategies for optimizing Ceph performance at scale. It describes the presenters' typical node configurations, including storage nodes with 72 HDDs and NVMe journals, and monitor/RGW nodes. Various techniques are discussed, like ensuring proper NUMA alignment of processes, IRQs, and mount points. General tuning tips include using the latest drivers, OS tuning, and addressing network issues. The document stresses that monitors can become overloaded during large rebalances and when deleting large pools, so more than one monitor is needed for large clusters.
Ceph began as a research project in 2005 to create a scalable object storage system. It was incubated at DreamHost from 2007-2012 and spun out as an independent company called Inktank in 2012. Key developments included the RADOS distributed storage cluster, erasure coding, and the Ceph filesystem. The project has grown a large community and is used in many production deployments, focusing on areas like tiering, erasure coding, replication, and integrating with the Linux kernel. Future plans include improving CephFS, expanding the ecosystem through different storage backends, strengthening governance, and targeting new use cases in big data and the enterprise.
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned - Ceph Community
This document summarizes lessons learned from Target's initial Ceph deployment and subsequent improvements. The initial deployment suffered from poor performance due to using unreliable SATA drives without caching. Instrumentation would have revealed issues sooner. The redesigned deployment used SSD journals and improved hardware, increasing performance 10x. Key lessons are to understand objectives, select suitable hardware, monitor metrics, and not assume Ceph can overcome poor hardware choices. Future work includes all-SSD testing and automating deployments.
Ceph Pacific is a major release of the Ceph distributed storage system scheduled for March 2021. It focuses on five key themes: usability, performance, ecosystem integration, multi-site capabilities, and quality. New features in Pacific include automated upgrades, improved dashboard functionality, snapshot-based CephFS mirroring, per-bucket replication in RGW, and expanded telemetry collection. Looking ahead, the Quincy release will focus on continued improvements in these areas such as resource-aware scheduling in cephadm and multi-site monitoring capabilities.
This document summarizes Dan van der Ster's experience scaling Ceph at CERN. CERN uses Ceph as the backend storage for OpenStack volumes and images, with plans to also use it for physics data archival and analysis. The 3PB Ceph cluster consists of 47 disk servers and 1,128 OSDs. Some lessons learned include managing latency, handling many objects, tuning CRUSH, trusting clients, and avoiding human errors when managing such a large cluster.
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019 - Sean Cohen
Starting from the basics, we explore the advantages of using Rook as a Storage operator to serve Ceph storage, the leading Software-Defined Storage platform in the Open Source world. Ceph automates the internal storage management, while Rook automates the user-facing operations and effectively turns a storage technology into a service transparent to the user. The combination delivers an impressive improvement in UX and provides the ideal storage platform for Kubernetes.
A comprehensive examination of use cases and open problems will complement our review of the Rook architecture. We will deep-dive into what Rook does well, what it does not do (yet), and what trade-offs using a storage operator involves operationally. With live access to a running cluster, we will showcase Rook in action as we discuss its capabilities.
https://www.openstack.org/summit/denver-2019/summit-schedule/events/23515/storage-101-rook-and-ceph
This document discusses Ubuntu OpenStack and Ceph storage. It provides an overview of Ceph, including how it works and its support in OpenStack. Ceph is an open source distributed storage system that provides block, object and file storage. It uses a RADOS distributed object store and can be deployed on commodity hardware. Ceph is fully supported in Ubuntu OpenStack via the Cinder volume service and Glance image service. The document demonstrates how to deploy Ceph using Juju charms to automate configuration and management.
The document summarizes Qihoo 360's experience deploying Ceph for storage at scale. They use Ceph RBD for virtual machine images and CephFS for a shared file system. For Ceph RBD, they have over 500 nodes across 30+ clusters with over 1000 object storage devices (OSDs). They use both all-SSD and hybrid SSD/HDD clusters depending on performance needs. Their experience highlights best practices for deployment, performance, stability and operations. For CephFS, they evaluated metadata performance and discussed considerations for a production deployment.
This document summarizes what's new in Ceph. Key updates include improved management and usability features like simplified configuration, hands-off operation, and device health tracking. It also covers new orchestrator capabilities for Kubernetes and container platforms, continued performance optimizations, and multi-cloud capabilities like object storage federation across data centers and clouds.
Cisco: Cassandra adoption on Cisco UCS & OpenStack - DataStax Academy
In this talk we will address how we developed our Cassandra environments utilizing the Cisco UCS OpenStack Platform with the DataStax Enterprise Edition software. In addition, we are utilizing open source Ceph storage in our infrastructure to optimize performance and reduce costs.
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph - Ceph Community
Hyper Converged PLCloud with CEPH
This document discusses PowerLeader Cloud (PLCloud), a cloud computing platform that uses a hyper-converged infrastructure with OpenStack, Docker, and Ceph. It provides an overview of PLCloud and how it has adopted OpenStack, Ceph, and other open source technologies. It then describes PLCloud's hyper-converged architecture and how it leverages OpenStack, Docker, and Ceph. Finally, it discusses a specific use case where Ceph RADOS Gateway is used for media storage and access in PLCloud.
Rook.io is an open source cloud-native storage orchestrator for Kubernetes that supports various storage providers like Ceph and allows provisioning of block storage, object storage, and shared file systems using CRDs. It provides a framework to deploy and manage different storage systems on Kubernetes, including Ceph, a highly scalable distributed storage solution for production environments. The presenter demonstrates configuring and deploying the Rook operator for Ceph, which manages the Ceph components and allows dynamically growing or replacing the underlying storage infrastructure.
Rook.io is an open source cloud-native storage orchestrator for Kubernetes that supports various storage providers like Ceph and allows provisioning of block storage, object storage, and shared file systems using CRDs. It provides a framework to deploy and manage different storage systems on Kubernetes, including Ceph, a highly scalable distributed storage solution for block storage, object storage, and file systems that can be run using Kubernetes primitives. The presenter demonstrates configuring and deploying the Rook operator for Ceph, which manages the Ceph cluster through CRDs to provide storage for applications like MySQL.
A Storage Orchestrator for Kubernetes
Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.
Rook uses the power of the Kubernetes platform to deliver its services: cloud-native container management, scheduling, and orchestration.
Kubernetes Stateful Workloads on Legacy Storage - Akhil Mohan
Slides presented at DevConf'19, India. A brief description of how storage devices can be abstracted in Kubernetes using Node-Storage-Device-Manager from OpenEBS, a CNCF sandbox project.
Ceph storage for OCP: deploying and managing Ceph on top of OpenShift conta... - OrFriedmann
Ceph is an open-source software-defined storage solution that provides unified block, file, and object storage. It uses a distributed cluster of storage nodes and microservice daemons to store and retrieve data with no single point of failure. Rook is an open-source project that provides storage orchestration for Kubernetes and allows easy deployment and management of Ceph clusters on Kubernetes through custom resources like CephCluster, CephBlockPool, and CephObjectStore.
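As a hedged sketch of how those custom resources are driven programmatically (not code from the document), the snippet below creates a minimal CephBlockPool object with the Kubernetes Python client; the spec fields follow Rook's ceph.rook.io/v1 API, but treat the exact names, namespace, and values as illustrative assumptions.

```python
# Hedged sketch: creating a Rook CephBlockPool custom resource with the
# Kubernetes Python client. Pool name, namespace, and replica count are assumptions.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

block_pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "replicapool", "namespace": "rook-ceph"},
    "spec": {
        "failureDomain": "host",          # spread replicas across hosts
        "replicated": {"size": 3},        # 3-way replication
    },
}

custom.create_namespaced_custom_object(
    group="ceph.rook.io", version="v1", namespace="rook-ceph",
    plural="cephblockpools", body=block_pool,
)
```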
This document provides an overview and summary of Red Hat Storage and Inktank Ceph. It discusses Red Hat acquiring Inktank Ceph in April 2014 and the future of Red Hat Storage having two flavors - Gluster edition and Ceph edition. Key features of Red Hat Storage 3.0 include enhanced data protection with snapshots, cluster monitoring, and deep Hadoop integration. The document also introduces Inktank Ceph Enterprise v1.2 and discusses Ceph components like RADOS, LIBRADOS, RBD, RGW and how Ceph can be used with OpenStack.
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat - OpenStack
Audience: Intermediate
About: Learn how cloud storage differs to traditional storage systems and how that delivers revolutionary benefits.
Starting with an overview of how Ceph integrates tightly into OpenStack, you'll see why 62% of OpenStack users choose Ceph. We'll then take a peek into the very near future to see how rapidly Ceph is advancing and how you'll be able to achieve all your childhood hopes and dreams in ways you never thought possible.
Speaker Bio: Andrew Hatfield – Practice Lead–Cloud Storage and Big Data, Red Hat
Andrew has over 20 years experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined storage future, he’s enjoying the subtle tones of Islay Whisky and shredding pow pow on the world’s best snowboard resorts.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
This document summarizes new features and upcoming releases for Ceph. In the Jewel release in April 2016, CephFS became more stable with improvements to repair and disaster recovery tools. The BlueStore backend was introduced experimentally to replace Filestore. Future releases Kraken and Luminous will include multi-active MDS support for CephFS, erasure code overwrites for RBD, management tools, and continued optimizations for performance and scalability.
This document discusses using Ceph block storage (RBD) with Apache CloudStack for distributed storage. Ceph provides block-level storage that scales for performance and capacity like SAN storage, addressing the need for EBS-like storage across availability zones. CloudStack currently uses local disk or requires separate storage resources per hypervisor, but using Ceph's distributed RBD allows datacenter-wide storage and removes constraints. Upcoming support in CloudStack includes format 2 RBD, snapshots, datacenter-wide storage resources, and removal of legacy storage dependencies.
The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand from http://www.inktank.com/news-events/webinars/. Enjoy!
OpenStack and Ceph: the Winning Pair
By: Sebastien Han
Ceph has become increasingly popular and saw several deployments inside and outside OpenStack. The community and Ceph itself has greatly matured. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance,and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables. The main goal of the talk is to convince those of you who aren't already using Ceph as a storage backend for OpenStack to do so. I consider the Ceph technology to be the de facto storage backend for OpenStack for a lot of good reasons that I'll expose during the talk. Since the Icehouse OpenStack summit, we have been working really hard to improve the Ceph integration. Icehouse is definitely THE big release for OpenStack and Ceph. In this session, Sebastien Han from eNovance will go through several subjects such as: Ceph overview Building a Ceph cluster - general considerations Why is Ceph so good with OpenStack? OpenStack and Ceph: 5 minutes quick start for developers Typical architecture designs State of the integration with OpenStack (icehouse best additions) Juno roadmap and beyond.
Video Presentation: http://bit.ly/1iLwTNf
Presentation held at GRNET Digital Technology Symposium on November 5-6, 2018 at the Stavros Niarchos Foundation Cultural Center, Athens, Greece.
• Introduction to Ceph and its internals
• Presentation of GRNET's Ceph deployments (technical specs, operations)
• Use cases: ESA Copernicus, ~okeanos, ViMa
3. Ceph in OpenStack - 2014
Source: http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
4. Ceph in OpenStack - 2015
Source: http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up
5. What is Ceph?
● Distributed storage system
● Algorithmic placement - CRUSH
● No single point of failure
● Self-healing and self-managing
● Runs on commodity hardware
– No vendor lock-in!
● Open source
– GPLv2 License
– Community driven
Probably one of the best examples of SDS (aka Software-Defined Storage)
8. RADOS COMPONENTS
OSDs:
10s to 10000s in a cluster
One per disk (or one per SSD, RAID group…)
Serve stored objects to clients
Intelligently peer for replication & recovery
Monitors:
Maintain cluster membership and state
Provide consensus for distributed decision-making
Small, odd number
These do not serve stored objects to clients
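As a hedged illustration of the monitors' role (not part of the original slides), the sketch below asks the monitor quorum for its status through the python-rados bindings; "quorum_status" is a standard monitor command, but the exact fields in the reply vary by release, so treat the parsing as an assumption.

```python
# Hedged sketch: querying monitor quorum status via python-rados.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    status = json.loads(outbuf)
    print("monitors in quorum:", status.get("quorum_names"))
finally:
    cluster.shutdown()
```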
11. CRUSH: DYNAMIC DATA PLACEMENT
CRUSH:
Pseudo-random placement algorithm
Fast calculation, no lookup
Repeatable, deterministic
Statistically uniform distribution
Stable mapping
Limited data migration on change
Rule-based configuration
Infrastructure topology aware
Adjustable replication
Weighting
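The following toy sketch is not the real CRUSH algorithm, but it illustrates the property the slide describes: placement is a repeatable calculation over shared inputs (object name, OSD list, weights), so any client can compute the same mapping without a lookup table.

```python
# Toy sketch of calculation-based placement (NOT the real CRUSH algorithm):
# every client hashes the object name against the same OSD list and weights,
# so placement is deterministic and needs no central lookup table.
import hashlib

def place(obj_name: str, osds: dict, replicas: int = 3):
    """osds maps osd-id -> weight; returns `replicas` distinct OSD ids."""
    def score(osd_id):
        h = hashlib.sha256(f"{obj_name}/{osd_id}".encode()).hexdigest()
        return int(h, 16) * osds[osd_id]        # weight biases the draw
    return sorted(osds, key=score, reverse=True)[:replicas]

osd_weights = {f"osd.{i}": 1.0 for i in range(8)}
print(place("rbd_data.1234.0000000000000000", osd_weights))
# The same inputs always yield the same OSDs, and changing one weight only
# moves a limited share of objects (the "limited data migration" property).
```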
12. ACCESSING A RADOS CLUSTER
[Diagram: an APPLICATION links LIBRADOS and talks over a socket to the RADOS CLUSTER (monitors and OSDs) to store and retrieve OBJECTs.]
13. LIBRADOS: RADOS ACCESS FOR APPS
LIBRADOS:
Direct access to RADOS for applications
C, C++, Python, PHP, Java, Erlang
Direct access to storage nodes
No HTTP overhead
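As a hedged illustration of that direct access path (not code from the slides), the snippet below writes and reads an object through the python-rados bindings with no HTTP gateway in between; the pool name is an assumption.

```python
# Hedged sketch: writing and reading an object directly through python-rados.
# The pool name "app-data" is an assumption.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('app-data')
    try:
        ioctx.write_full('greeting', b'hello rados')   # object name -> bytes
        print(ioctx.read('greeting'))                   # b'hello rados'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```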
14. THE RADOS GATEWAY
[Diagram: applications speak REST to RADOSGW instances, each of which uses LIBRADOS over a socket to store objects in the RADOS CLUSTER.]
15. RADOSGW MAKES RADOS WEBBY
RADOSGW:
REST-based object storage proxy
Uses RADOS to store objects
API supports buckets, accounts
Usage accounting for billing
Compatible with S3 and Swift applications
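A hedged example of the S3 compatibility mentioned above (not from the slides): a stock S3 client such as boto3 can point at a RADOSGW endpoint and use buckets and objects as usual. The endpoint URL and credentials below are placeholders.

```python
# Hedged sketch: talking to RADOSGW through its S3-compatible API with boto3.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',   # assumed RADOSGW endpoint
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)
s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'stored via RGW')
print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())
```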
21. Ceph as Unified Storage for OpenStack - Advantages
● No storage silos
– Deploy/Manage 1 cluster with diff. pools
● Create image from volume and vice-versa optimizations
● Nova boot from volume optimizations
● Live migration
● Volume retype/migrate optimizations possible (WIP)
● Cinder Backup optimizations
– Full and differential
● Cinder Volume replication (DR) made efficient via RBD mirroring (WIP)
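A hedged sketch of the copy-on-write snapshot-and-clone mechanism behind the image/volume and boot-from-volume optimizations listed above; it mirrors what the drivers do conceptually, not their actual code, and the pool and image names are assumptions.

```python
# Hedged sketch: snapshot a Glance image and thin-clone it into the Cinder pool
# instead of making a full copy. Pool and image names are assumptions.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    images = cluster.open_ioctx('images')    # assumed Glance pool
    volumes = cluster.open_ioctx('volumes')  # assumed Cinder pool
    try:
        with rbd.Image(images, 'fedora-base') as img:
            img.create_snap('snap')          # snapshot the Glance image
            img.protect_snap('snap')         # clones require a protected snap
        # Instant, thin (copy-on-write) clone into the volumes pool:
        rbd.RBD().clone(images, 'fedora-base', 'snap', volumes, 'vol-0001',
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        images.close()
        volumes.close()
finally:
    cluster.shutdown()
```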
22. Ceph & OpenStack Storage - Summary
● Object Storage like Swift
– Ceph RADOSGW as a drop-in replacement for OpenStack Swift
● Block Storage in Cinder
– Ceph RBD pool for storing Cinder volumes
● Ephemeral Storage in Nova
– Ceph RBD pool as backend for Ephemeral storage
– Nova boot from volume
● Image Storage in Glance
– Ceph RBD pool as a glance image store
● Backup target for Cinder-Backup
– Ceph RBD pool as a backup target for Cinder
– Backup / Restore cinder volumes to/from Ceph RBD pool
● File Storage in Manila (upcoming / future)
– CephFS as a backend for Manila FS shares
23. References
● Get the best configuration for your cloud
– Devil is in the details
– http://ceph.com/docs/master/rbd/rbd-openstack
● Ceph and OpenStack, current integration, roadmap (Vancouver summit prez)
– http://www.slideshare.net/Red_Hat_Storage/open-stack-ceph-liberty
24. Questions?
● Disclaimer
– Most (if not all) content for this prez taken from SlideShare, YouTube videos, ceph.com docs & other publicly available presentations.