Red Hat Enterprise Linux: Open, hyperconverged infrastructure (Red_Hat_Storage)
The next generation of IT will be built around flexible infrastructures and operational efficiencies, lowering costs and increasing overall business value in the organization.
A hyperconverged infrastructure built on Red Hat supported technologies, including Linux, Gluster storage, and the oVirt virtualization manager, runs on commodity x86 servers and uses the performance of local storage to deliver a cost-effective, modular, highly scalable, and secure hyperconverged solution.
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula (OpenNebula Project)
Hyperconvergence integrates compute, storage, networking, and virtualization resources from scratch in a commodity hardware box supported by a single vendor. It offers scalability, performance, centralized management, and reliability in a software-defined design. StorPool is storage software that can be installed on servers to pool and aggregate the capacity and performance of their drives. It provides standard block devices and replicates data across drives and servers for redundancy. StorPool integrates fully with OpenNebula to provide a robust hyperconverged infrastructure on commodity hardware using distributed storage.
i. SUSE Enterprise Storage 3 provides iSCSI access to connect remotely to Ceph storage over TCP/IP, allowing any iSCSI initiator to reach the storage over a network. The iSCSI target driver sits on top of RBD (RADOS Block Device) to enable this access.
ii. The lrbd package simplifies setting up an iSCSI gateway to Ceph. Multiple gateways can be configured for high availability using the targetcli utility.
iii. The iSCSI gateway has been optimized to handle certain SCSI operations, such as atomic compare-and-write, efficiently by offloading the work to OSDs and avoiding locking on the gateway nodes.
PCIe peer-to-peer communication can reduce bottlenecks between high-performance I/O devices such as SSDs and network cards by allowing them to transfer data directly without going through the CPU. PMC is developing an NVM Express NVRAM card with a DRAM cache that is accessible through the standard NVMe block driver or a custom character driver, and it can achieve almost 1 million 4KB IOPS or 10 million 64B IOPS. The company has set up a test environment of PCIe devices attached directly to CPU lanes, running Debian Linux with custom kernel patches, to demonstrate these peer-to-peer capabilities.
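As a rough sanity check on those headline figures, the implied raw bandwidth follows directly from IOPS multiplied by transfer size; the short sketch below only restates the numbers quoted above.

```python
# Rough bandwidth implied by the quoted IOPS figures (illustrative arithmetic only).

def bandwidth_gbps(iops: float, block_bytes: int) -> float:
    """Return approximate bandwidth in GB/s for a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

# ~1 million 4 KiB IOPS via the NVMe block driver
print(f"4 KiB path: {bandwidth_gbps(1_000_000, 4096):.1f} GB/s")   # ~4.1 GB/s

# ~10 million 64-byte IOPS via the custom character driver
print(f"64 B path : {bandwidth_gbps(10_000_000, 64):.2f} GB/s")    # ~0.64 GB/s
```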
This document discusses the use of Docker containers to deploy an OpenNebula cloud (Corona). It summarizes the different node types used, including controllers, hypervisors, ONE servers, NFS, and Sunstone. It describes challenges with configuring containers that require host privileges or access to resources such as cgroups and devices. Systemd and Supervisord are compared for managing processes. Configuring and managing the oneadmin token and SSH keys across dynamic nodes is challenging. Overall, the document evaluates approaches to deploying OpenNebula components in Docker containers for scalability, automation, and manageability.
SUSE Enterprise Storage 3 provides iSCSI access to Ceph storage remotely over TCP/IP, allowing clients to access Ceph storage using the iSCSI protocol. The iSCSI target driver in SES3 provides access to RADOS block devices, so any iSCSI initiator can connect to SES3 over the network. SES3 also includes optimizations for iSCSI gateways, such as offloading operations to object storage devices to reduce locking on gateway nodes.
The document discusses Ceph, an open-source distributed storage system. It provides an overview of Ceph's architecture and components, how it works, and considerations for setting up a Ceph cluster. Key points include: Ceph provides unified block, file, and object storage interfaces and scales horizontally to very large clusters. It uses CRUSH to deterministically map data across a cluster for redundancy. Setup choices such as network, storage nodes, disks, caching, and placement groups impact performance and must be tuned for the workload.
The document provides recommendations for optimizing an OpenStack cloud environment using Ceph storage. It discusses configuring Glance, Cinder, and Nova to integrate with Ceph, as well as recommendations for the Ceph cluster itself regarding OSDs, journals, networking, and failure domains. Performance was improved by converting image formats to raw, enabling SSD journals, bonding network interfaces, and adjusting scrubbing settings.
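As an illustration of what that Glance/Cinder/Nova integration typically involves, here is a minimal sketch that prints the kind of RBD-related settings usually placed in glance-api.conf, cinder.conf, and nova.conf. The pool names, user name, and backend section name are placeholders, and exact option names vary by OpenStack release, so treat this as a starting point rather than the presentation's own configuration.

```python
# Sketch of typical RBD-related settings for Glance, Cinder, and Nova.
# Pool and user names are placeholders; verify option names for your release.
import configparser
import sys

RBD_SETTINGS = {
    "glance-api.conf": {
        "glance_store": {
            "stores": "rbd",
            "default_store": "rbd",
            "rbd_store_pool": "images",            # placeholder pool name
            "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
        },
    },
    "cinder.conf": {
        "rbd-backend": {                           # placeholder backend section name
            "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
            "rbd_pool": "volumes",                 # placeholder pool name
            "rbd_user": "cinder",                  # placeholder cephx user
        },
    },
    "nova.conf": {
        "libvirt": {
            "images_type": "rbd",
            "images_rbd_pool": "vms",              # placeholder pool name
        },
    },
}

for filename, sections in RBD_SETTINGS.items():
    cfg = configparser.ConfigParser()
    cfg.read_dict(sections)
    print(f"# --- {filename} ---")
    cfg.write(sys.stdout)
```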
Ceph Pacific is a major release of the Ceph distributed storage system scheduled for March 2021. It focuses on five key themes: usability, performance, ecosystem integration, multi-site capabilities, and quality. New features in Pacific include automated upgrades, improved dashboard functionality, snapshot-based CephFS mirroring, per-bucket replication in RGW, and expanded telemetry collection. Looking ahead, the Quincy release will focus on continued improvements in these areas such as resource-aware scheduling in cephadm and multi-site monitoring capabilities.
This document discusses high availability (HA) features in SUSE Linux Enterprise Server 12 SP2, including:
- A policy-driven HA cluster with continuous data replication across nodes and simple setup/installation.
- Key HA concepts like resources, constraints, and STONITH (shoot the other node in the head) fencing mechanisms.
- The new Hawk2 web console for managing HA clusters.
- Support for geo-clustering across data centers with concepts like tickets, boothd, and arbitrators.
- Options for maintenance and standby modes, new Cluster-MD software RAID, DRBD replication, OCFS2 and GFS2 cluster filesystems, and easy HA
The document summarizes updates to CephFS in the Pacific release, including improvements to usability, performance, ecosystem integration, multi-site capabilities, and quality. Key updates include MultiFS now being stable, MDS autoscaling, cephfs-top for performance monitoring, scheduled snapshots, NFS gateway support, feature bits for compatibility checking, and improved testing coverage. Performance improvements include ephemeral pinning, capability management optimizations, and asynchronous operations. Multi-site replication between clusters is now possible with snapshot-based mirroring.
This presentation provides an overview of Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of running Red Hat Ceph Storage on Dell servers with proven hardware components that provide high scalability, improved ROI, and support for unstructured data.
Quick-and-Easy Deployment of a Ceph Storage Cluster (Patrick Quairoli)
Quick & Easy Deployment of a Ceph Storage Cluster with SUSE Enterprise Storage
The document discusses deploying a Ceph storage cluster using SUSE Enterprise Storage. It begins with an introduction to Ceph and how it works as a distributed object storage system. It then covers designing Ceph clusters based on workload needs and measuring performance. The document concludes with step-by-step instructions for deploying a basic three node Ceph cluster with monitoring using SUSE Enterprise Storage.
This document summarizes the updates in Ceph Pacific and previews the updates coming in Quincy. Some of the key updates in Pacific include improved usability through more hands-off defaults, distributed tracing in OSDs, and canceling ongoing scrubs. Performance improvements include more efficient PG deletion and msgr2 wire format. Telemetry features were added to collect anonymized crash reports and device health data. For Quincy, some highlights mentioned are using the mclock scheduler by default, new PG autoscaling profiles, and further BlueStore optimizations.
The document summarizes new features and updates in Ceph's RBD block storage component. Key points include: improved live migration support using external data sources; built-in LUKS encryption; up to 3x better small I/O performance; a new persistent write-back cache; snapshot quiesce hooks; kernel messenger v2 and replica read support; and initial RBD support on Windows. Future work planned for Quincy includes encryption-formatted clones, cache improvements, usability enhancements, and expanded ecosystem integration.
Red Hat Gluster Storage, Container Storage and CephFS Plans (Red_Hat_Storage)
At Red Hat Storage Day New York on 1/19/16, Red Hat's Sayan Saha took attendees through an overview of Red Hat Gluster Storage that included future plans for the product, Red Hat's plans for container storage, and the company's plans for CephFS.
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas... (Red_Hat_Storage)
Red Hat Ceph Storage can utilize flash technology to accelerate applications in three ways: 1) utilize flash caching to accelerate critical data writes and reads, 2) utilize storage tiering to place performance critical data on flash and less critical data on HDDs, and 3) utilize all-flash storage to accelerate performance when all data is critical or caching/tiering cannot be used. The document then discusses best practices for leveraging NVMe SSDs versus SATA SSDs in Ceph configurations and optimizing Linux settings.
Performance optimization for all flash based on aarch64 v2.0 (Ceph Community)
This document discusses performance optimization techniques for All Flash storage systems based on ARM architecture processors. It provides details on:
- The processor used, which is the Kunpeng920 ARM-based CPU with 32-64 cores at 2.6-3.0GHz, along with its memory and I/O controllers.
- Optimizing performance through both software and hardware techniques, including improving CPU usage, I/O performance, and network performance.
- Specific optimization techniques like data placement to reduce cross-NUMA access, multi-port NIC deployment, using multiple DDR channels, adjusting messaging throttling, and optimizing queue wait times in the object storage daemon (OSD).
- Other
Making distributed storage easy: usability in Ceph Luminous and beyond (Sage Weil)
Distributed storage is complicated, and historically Ceph hasn't spent a lot of time trying to hide that complexity, instead focusing on correctness, features, and flexibility. There has been a recent shift in focus to simplifying and streamlining the user/operator experience so that the information that is actually important is available without the noise of irrelevant details. Recent feature work has also focused on simplifying configurations that were previously possible but required tedious configuration steps to manage.
This talk will cover the key new efforts in Ceph Luminous that aim to simplify and automate cluster management, as well as the plans for upcoming releases to address longstanding Cephisms that make it "hard" (e.g., choosing PG counts).
DigitalOcean uses Ceph for block and object storage backing for their cloud services. They operate 37 production Ceph clusters running Nautilus and one on Luminous, storing over 54 PB of data across 21,500 OSDs. They deploy and manage Ceph clusters using Ansible playbooks and containerized Ceph packages, and monitor cluster health using Prometheus and Grafana dashboards. Upgrades can be challenging due to potential issues uncovered and slow performance on HDD backends.
Keeping OpenStack storage trendy with Ceph and containers (Sage Weil)
The conventional approach to deploying applications on OpenStack uses virtual machines (usually KVM) backed by block devices (usually Ceph RBD). As interest increases in container-based application deployment models like Docker, it is worth looking at what alternatives exist for combining compute and storage (both shared and non-shared). Mapping RBD block devices directly to host kernels trades isolation for performance and may be appropriate for many private clouds without significant changes to the infrastructure. More importantly, moving away from virtualization allows for non-block interfaces and a range of alternative models based on file or object storage.
Attendees will leave this talk with a basic understanding of the storage components and services available to both virtual machines and Linux containers, a view of several ways they can be combined along with the performance, reliability, and security trade-offs of those combinations, and several proposals for how the relevant OpenStack projects (Nova, Cinder, Manila) can work together to make this easy.
Ceph, an open source distributed storage system, has been ported to run natively on Windows Server to provide improved performance over the iSCSI gateway and better integration into the Windows ecosystem. Key components like librbd and librados have been ported to Windows and a new WNBD kernel driver implements fast communication with Linux-based Ceph OSDs. This allows access to RBD block devices and CephFS file systems from Windows with comparable performance to Linux.
In this session, Boyan Krosnov, CPO of StorPool will discuss a private cloud setup with KVM achieving 1M IOPS per hyper-converged (storage+compute) node. We will answer the question: What is the optimum architecture and configuration for performance and efficiency?
Ceph is evolving its network stack to improve performance. It is moving from AsyncMessenger to using RDMA for better scalability and lower latency. RDMA support is now built into Ceph and provides native RDMA using verbs or RDMA-CM. This allows using InfiniBand or RoCE networks with Ceph. Work continues to fully leverage RDMA for features like zero-copy replication and erasure coding offload.
Achieving the Ultimate Performance with KVM (DevOps.com)
Building and managing a cloud is not an easy task. It needs solid knowledge, proper planning and extensive experience in selecting the proper components and putting them together.
Many companies build new-age KVM clouds, only to find out that their applications & workloads do not perform well. Join this webinar to learn how to get the most out of your KVM cloud and how to optimize it for performance.
Join this webinar and learn:
Why performance matters and how to measure it properly?
What are the main components of an efficient new-age cloud?
How to select the right hardware?
How to optimize CPU and memory for ultimate performance?
Which network components work best?
How to tune the storage layer for performance?
Ceph Day KL - Ceph Tiering with High Performance Architecture (Ceph Community)
Ceph can provide storage tiering with different performance levels. It allows combining SSDs, SAS, and SATA disks from multiple nodes into pools to provide tiered storage. Performance testing showed that for reads, Ceph provided good performance across all tiers, while for writes NVMe disks had the best performance compared to SSD, SAS, and SATA disks. FIO, IOmeter, and IOzone were some of the tools used to measure throughput and IOPS.
This document provides an overview and planning guidelines for a first Ceph cluster. It discusses Ceph's object, block, and file storage capabilities and how it integrates with OpenStack. Hardware sizing examples are given for a 1 petabyte storage cluster with 500 VMs requiring 100 IOPS each. Specific lessons learned are also outlined, such as realistic IOPS expectations from HDD and SSD backends, recommended CPU and RAM per OSD, and best practices around networking and deployment.
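To make that sizing example concrete, the aggregate demand is simply 500 VMs x 100 IOPS; the per-device IOPS figures in the sketch below are illustrative assumptions (they are not taken from the document) used to show how the demand translates into drive counts.

```python
# Back-of-the-envelope sizing for the 1 PB / 500 VM x 100 IOPS example.
# Per-device IOPS values are assumptions for illustration only.

vms = 500
iops_per_vm = 100
total_iops = vms * iops_per_vm                  # 50,000 IOPS aggregate demand

assumed_hdd_iops = 150      # assumption: 7,200 RPM enterprise HDD
assumed_ssd_iops = 10_000   # assumption: conservative SATA SSD figure

print(f"Aggregate demand: {total_iops:,} IOPS")
print(f"HDD-only backend: ~{total_iops // assumed_hdd_iops} drives (before replication overhead)")
print(f"SSD-only backend: ~{total_iops // assumed_ssd_iops} drives (before replication overhead)")
```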
Best Practices & Performance Tuning - OpenStack Cloud Storage with Ceph - In this presentation, we discuss best practices and performance tuning for OpenStack cloud storage with Ceph to achieve high availability, durability, reliability, and scalability at any point in time. We also discuss best practices for failure domains, recovery, rebalancing, backfilling, scrubbing, deep-scrubbing, and operations.
Many companies build new-age KVM clouds, only to find out that their applications and workloads do not perform well. In this talk we'll show you how to get the most out of your KVM cloud and how to optimize it for performance: you'll understand why performance matters and how to measure it properly. We'll teach you how to optimize CPU and memory for ultimate performance and how to tune the storage layer for performance. You'll find out what the main components of an efficient new-age cloud are and which network components work best. In addition, you'll learn how to select the right hardware to achieve unmatched performance for your new-age cloud and applications.
Venko Moyankov is an experienced system administrator and solutions architect at StorPool Storage. He has experience managing large virtualization deployments, working in telcos, and designing and supporting the infrastructure of large enterprises. Over the last year, his focus has been on helping companies globally to build the best storage solution for their needs and projects.
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture (Ceph Community)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, and techniques for configuring and optimizing all-flash Ceph performance.
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid, including the Linux kernel, networking and storage technologies such as InfiniBand and NVMe, and the hardware. Benchmarks show FlashGrid achieving IOPS and throughput comparable to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
Planning & Best Practice for Microsoft Virtualization (Lai Yoong Seng)
This document provides best practices for planning and configuring Microsoft virtualization. It discusses guidelines for Hyper-V hosts, virtual machines, SQL Server, Active Directory, Exchange Server, and SharePoint. It recommends using tools like MAP for assessment and following Microsoft's support policies. Key guidelines include processor and memory allocation, storage configuration, network optimization, and redundancy. The document aims to help users understand and apply Microsoft's virtualization best practices.
Red Hat Ceph Storage Acceleration Utilizing Flash Technology (Red_Hat_Storage)
Red Hat Ceph Storage can utilize flash technology to accelerate applications in three ways: 1) use all-flash storage for the highest performance, 2) use a hybrid configuration with performance-critical data on a flash tier and colder data on an HDD tier, or 3) utilize host caching of critical data on flash. Benchmark results showed that using NVMe SSDs in Ceph provided much higher performance than SATA SSDs, with speed increases of up to 8x for some workloads. However, testing also showed that Ceph may not be well suited for OLTP MySQL workloads due to small random reads/writes, as local SSD storage outperformed the Ceph cluster. Proper Linux tuning is also needed to maximize SSD performance within Ceph.
Ceph Day San Jose - All-Flash Ceph on NUMA-Balanced Server (Ceph Community)
The document discusses optimizing Ceph storage performance on QCT servers using NUMA-balanced hardware and tuning. It provides details on QCT hardware configurations for throughput, capacity and IOPS-optimized Ceph storage. It also describes testing done in QCT labs using a 5-node all-NVMe Ceph cluster that showed significant performance gains from software tuning and using multiple OSD partitions per SSD.
Ceph Day London 2014 - Deploying Ceph in the wild (Ceph Community)
The document discusses Wido den Hollander's experience deploying Ceph storage clusters in various organizations. It describes two example deployments: one using Ceph block storage with CloudStack for 1000 VMs and S3 object storage, using SSDs and HDDs optimized for IOPS; the other using Ceph block storage with OCFS2 shared filesystem between web servers, using all SSDs and 10GbE networking for low latency. The document emphasizes designing for high IOPS rather than just capacity, and discusses best practices like regular hardware updates and testing recovery scenarios.
The document provides information about virtual machine extensions (VMX) on Juniper Networks routers. It discusses hardware virtualization concepts including guest virtual machines running on a host machine. It then describes the different types of virtualization including fully virtualized, para-virtualized, and hardware-assisted. The rest of the document goes into details about the VMX product, architecture, forwarding model, and performance considerations for different use cases.
Ceph Community Talk on High-Performance Solid State Ceph (Ceph Community)
The document summarizes a presentation given by representatives from various companies on optimizing Ceph for high-performance solid state drives. It discusses testing a real workload on a Ceph cluster with 50 SSD nodes that achieved over 280,000 read and write IOPS. Areas for further optimization were identified, such as reducing latency spikes and improving single-threaded performance. Various companies then described their contributions to Ceph performance, such as Intel providing hardware for testing and Samsung discussing SSD interface improvements.
QCT Ceph Solution - Design Consideration and Reference Architecture (Patrick McGarry)
This document discusses QCT's Ceph storage solutions, including an overview of Ceph architecture, QCT hardware platforms, Red Hat Ceph software, workload considerations, reference architectures, test results and a QCT/Red Hat whitepaper. It provides technical details on QCT's throughput-optimized and capacity-optimized solutions and shows how they address different storage needs through workload-driven design. Hands-on testing and a test drive lab are offered to explore Ceph features and configurations.
QCT Ceph Solution - Design Consideration and Reference Architecture (Ceph Community)
This document discusses QCT's Ceph storage solutions, including an overview of Ceph architecture, QCT hardware platforms, Red Hat Ceph software, workload considerations, benchmark testing results, and a collaboration between QCT, Red Hat, and Intel to provide optimized and validated Ceph solutions. Key reference architectures are presented targeting small, medium, and large storage capacities with options for throughput, capacity, or IOPS optimization.
Design Considerations, Installation, and Commissioning of the RedRaider Cluster at the Texas Tech University High Performance Computing Center
Outline of this talk:
HPCC Staff and Students
Previous clusters
• History, performance, usage patterns, and experience
Motivation for Upgrades
• Compute capacity goals
• Related considerations
Installation and Benchmarks
Conclusions and Q&A
Crimson: Ceph for the Age of NVMe and Persistent Memory (ScyllaDB)
Ceph is a mature open source software-defined storage solution that was created over a decade ago.
During that time, new and faster storage technologies have emerged, including NVMe and persistent memory.
The Crimson project's aim is to create a better Ceph OSD that is better suited to those faster devices. The Crimson OSD is built on the Seastar C++ framework and can leverage these devices by minimizing latency, CPU overhead, and cross-core communication. This talk will discuss the project's design, our current status, and our future plans.
This document summarizes OpenStack Compute features related to the Libvirt/KVM driver, including updates in Kilo and predictions for Liberty. Key Kilo features discussed include CPU pinning for performance, huge page support, and I/O-based NUMA scheduling. Predictions for Liberty include improved hardware policy configuration, post-plug networking scripts, further SR-IOV support, and hot resize capability. The document provides examples of how these features can be configured and their impact on guest virtual machine configuration and performance.
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha... (Red_Hat_Storage)
This document discusses Supermicro's evolution from server and storage innovation to total solutions innovation. It provides examples of their all-flash storage servers and Red Hat Ceph testing results. Finally, it outlines their approach to providing optimized, turnkey storage solutions based on workload requirements and best practices learned from customer deployments and testing.
Azure VM 101 - HomeGen by CloudGen Verona (Marco Obinu)
Slides presented during HomeGen by CloudGen Verona, about how to properly size an Azure IaaS VM, with an additional focus on high availability and cost-saving topics.
Session recording: https://youtu.be/C8v6c6EkJ9A
Demo: https://github.com/OmegaMadLab/SqlIaasVmPlayground
Red Hat OpenStack and Storage Presentation
1. Red Hat OpenStack Platform and Red Hat Ceph
Mayur Shetty - Sr. Solution Architect
Global Partners & Alliances, Red Hat
2. Agenda
• Introduction to Red Hat OpenStack Platform director
• Undercloud
• Overcloud
• High Availability
• Ceph Storage
• Requirements - Environment, Undercloud, Network, and Overcloud
• Red Hat Storage - Ceph
6. Overcloud
• Controller
  - Provides administrative, networking, and HA services for OpenStack
  - Runs the OpenStack services, MariaDB, and Open vSwitch
  - Pacemaker and Galera for HA services
  - Ceph Monitors
• Compute
  - Nova service, KVM/QEMU, Ceilometer agent, and Open vSwitch
• Ceph Storage
  - Each node contains the Ceph Object Storage Daemon (OSD)
  - Block, Object, and File storage
  - The storage layer scales like the OpenStack infrastructure layer.
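The role breakdown on this slide can be captured as a simple data structure, which is convenient when scripting checks against a deployment plan; the mapping below only restates the slide's content.

```python
# Overcloud roles and the main services each one runs, as listed on the slide above.
OVERCLOUD_ROLES = {
    "Controller": [
        "OpenStack services", "MariaDB", "Open vSwitch",
        "Pacemaker", "Galera", "Ceph Monitor",
    ],
    "Compute": [
        "Nova service", "KVM/QEMU", "Ceilometer agent", "Open vSwitch",
    ],
    "Ceph Storage": [
        "Ceph OSD (block, object, and file storage)",
    ],
}

for role, services in OVERCLOUD_ROLES.items():
    print(f"{role}: {', '.join(services)}")
```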
9. Environment Requirement
NOTE:
• It is recommended to use bare-metal machines for all nodes.
• All overcloud bare-metal nodes require Intelligent Platform Management Interface (IPMI).

Node Type     Minimum   Recommended
Director      1         1
Compute       1         3
Controller    1         3
Ceph          3
10. Undercloud Requirement

            Minimum requirement
CPU         8-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions
Memory      16 GB of RAM
Disk        40 GB (root disk)
NICs        2 x 1 Gbps NICs; 10 Gbps recommended for the Provisioning network
Host OS     RHEL 7.3
SELinux     Enabled
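One practical use of the table above is as a pre-flight check on the machine that will host the director; the sketch below encodes the slide's minimums, and the candidate host values are made-up examples.

```python
# Pre-flight check of a candidate undercloud host against the slide's minimums.

UNDERCLOUD_MINIMUMS = {"cpu_cores": 8, "ram_gb": 16, "root_disk_gb": 40, "nics": 2}

def check_undercloud(host: dict) -> list[str]:
    """Return a list of requirements the host fails to meet."""
    return [
        f"{key}: have {host.get(key, 0)}, need >= {minimum}"
        for key, minimum in UNDERCLOUD_MINIMUMS.items()
        if host.get(key, 0) < minimum
    ]

# Example host (made-up values)
candidate = {"cpu_cores": 8, "ram_gb": 32, "root_disk_gb": 100, "nics": 1}
problems = check_undercloud(candidate)
print("OK" if not problems else "\n".join(problems))
```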
11. Network Requirement - Undercloud

Network                 Purpose
Provisioning Network    Provides DHCP and PXE boot functions
External Network        Provides remote connectivity
12. Network Requirement - Overcloud

NIC configuration           Purpose
Single NIC configuration    One NIC for both Provisioning and all other Overcloud network types
Dual NIC configuration      One NIC for Provisioning, a second NIC for the other Overcloud network types
Multiple NICs               Each NIC uses a subnet for a different Overcloud network type

NOTE:
• Use NIC bonding for high availability.
• Set all nodes to PXE boot off the Provisioning NIC, and disable PXE boot on the other NICs.
13. Overcloud Requirement - Compute

CPU       Minimum: 4-core 64-bit x86 processor with Intel 64 or AMD64 CPU extensions, with AMD-V or Intel VT virtualization extensions enabled
Memory    Minimum: 6 GB of RAM. Additional RAM depends on the VMs running.
Disk      Minimum: 40 GB
NICs      Minimum: 1 x 1 Gbps NIC. Recommended: 2 (or more) x 10 Gbps NICs
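Because the slide only notes that additional RAM depends on the VMs running, a common way to budget compute-node memory is base RAM plus the sum of guest memory plus a small per-guest overhead; the overhead figure below is an assumption, not a number from the deck.

```python
# Rough compute-node RAM budget: 6 GB base (slide minimum) + guest RAM + assumed overhead.

BASE_RAM_GB = 6            # minimum from the slide
OVERHEAD_PER_VM_GB = 0.5   # assumption: hypervisor/QEMU overhead per guest

def compute_node_ram_gb(vm_ram_gb: list[float]) -> float:
    return BASE_RAM_GB + sum(vm_ram_gb) + OVERHEAD_PER_VM_GB * len(vm_ram_gb)

# Example: ten 8 GB guests on one compute node
print(f"{compute_node_ram_gb([8] * 10):.0f} GB")   # 6 + 80 + 5 = 91 GB
```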
14. Overcloud Requirement - Controller

CPU       Minimum: 64-bit x86 processor with Intel 64 or AMD64 CPU extensions
Memory    Minimum: 32 GB of RAM. Recommended: 3 GB per vCPU (e.g., 3 GB x 48 vCPUs = 144 GB)
Disk      Minimum: 40 GB or more
NICs      Minimum: 2 x 1 Gbps NICs. Recommended: 2 (or more) x 10 Gbps NICs
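The recommended controller memory is a straight multiplication, as the slide's own example (3 GB x 48 vCPUs = 144 GB) shows; a one-line helper makes the rule explicit.

```python
# Recommended controller RAM per the slide: 3 GB per vCPU, with the slide's 32 GB minimum as a floor.

def controller_ram_gb(vcpus: int, gb_per_vcpu: int = 3, minimum_gb: int = 32) -> int:
    return max(minimum_gb, vcpus * gb_per_vcpu)

print(controller_ram_gb(48))   # 144 GB, matching the slide's example
```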
15. Overcloud Requirement - Ceph

CPU       Minimum: 64-bit x86 processor with Intel 64 or AMD64 CPU extensions
Memory    Minimum: 1 GB of RAM per 1 TB of hard disk space
Disk      Minimum: 3 or more disks
NICs      Minimum: 1 x 1 Gbps NIC. Recommended: 2 or more 10 Gbps NICs
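The Ceph node memory rule is likewise a simple proportion of 1 GB of RAM per 1 TB of raw disk; a small helper shows how it scales with node capacity (the 12 x 4 TB example is illustrative).

```python
# Ceph storage node RAM per the slide: 1 GB of RAM per 1 TB of hard disk space.

def ceph_node_ram_gb(raw_capacity_tb: float) -> float:
    return raw_capacity_tb * 1.0   # 1 GB per 1 TB

# Example: a node with 12 x 4 TB HDDs (48 TB raw)
print(f"{ceph_node_ram_gb(12 * 4):.0f} GB")   # 48 GB
```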
19. Red Hat Ceph Storage: Workloads and Use cases

SMALL     MEDIUM    LARGE
250 TB+   1 PB+     2 PB+
20. Red Hat Ceph Storage: IOPS-OPTIMIZED SOLUTION

CPU             10 cores per NVMe SSD, assuming a 2 GHz CPU
RAM             16 GB baseline, plus 2 GB per OSD
Networking      10 GbE per 12 OSDs (each for client- and cluster-facing networks)
OSD media       High-performance, high-endurance enterprise NVMe SSDs
OSDs            Four per NVMe SSD
Journal media   High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs
Controller      Native PCIe bus
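These ratios compose into a simple per-node calculator: four OSDs per NVMe SSD, 10 cores per NVMe SSD, 16 GB of RAM plus 2 GB per OSD, and one 10 GbE link per 12 OSDs on each network. The sketch below just encodes the table.

```python
import math

# Per-node sizing for the IOPS-optimized profile, using the ratios from the table above.

def iops_optimized_node(nvme_ssds: int) -> dict:
    osds = nvme_ssds * 4                                  # four OSDs per NVMe SSD
    return {
        "osds": osds,
        "cpu_cores_2ghz": nvme_ssds * 10,                 # 10 cores per NVMe SSD
        "ram_gb": 16 + 2 * osds,                          # 16 GB baseline + 2 GB per OSD
        "nics_10gbe_per_network": math.ceil(osds / 12),   # one 10 GbE link per 12 OSDs
    }

print(iops_optimized_node(nvme_ssds=4))
# {'osds': 16, 'cpu_cores_2ghz': 40, 'ram_gb': 48, 'nics_10gbe_per_network': 2}
```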
21. Red Hat Ceph Storage: THROUGHPUT-OPTIMIZED SOLUTION

CPU                      0.5 cores per HDD, assuming a 2 GHz CPU
RAM                      16 GB baseline, plus 2 GB per OSD
Networking               10 GbE per 12 OSDs (each for client- and cluster-facing networks)
OSD media                7,200 RPM enterprise HDDs
OSDs                     One per HDD
Journal media            High-endurance, high-performance enterprise serial-attached SCSI (SAS) or NVMe SSDs
OSD-to-journal ratio     4-5:1 for an SSD journal, or 12-18:1 for an NVMe journal
Host Bus Adapter (HBA)   Just a bunch of disks (JBOD)
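The throughput-optimized profile lends itself to the same style of calculation, with one OSD per HDD, 0.5 cores per HDD, and journals shared at the stated OSD-to-journal ratios; the ratios come from the table, while picking the low end of each range for the example is an assumption.

```python
import math

# Per-node sizing for the throughput-optimized profile (ratios from the table above).

def throughput_optimized_node(hdds: int, journal: str = "ssd") -> dict:
    osds = hdds                                        # one OSD per HDD
    # OSD-to-journal ratios from the slide: 4-5:1 for SSD, 12-18:1 for NVMe.
    ratio = {"ssd": 4, "nvme": 12}[journal]            # low end of each range (assumption)
    return {
        "osds": osds,
        "cpu_cores_2ghz": math.ceil(hdds * 0.5),       # 0.5 cores per HDD
        "ram_gb": 16 + 2 * osds,                       # 16 GB baseline + 2 GB per OSD
        "journal_devices": math.ceil(osds / ratio),
        "nics_10gbe_per_network": math.ceil(osds / 12),
    }

print(throughput_optimized_node(hdds=12, journal="ssd"))
# {'osds': 12, 'cpu_cores_2ghz': 6, 'ram_gb': 40, 'journal_devices': 3, 'nics_10gbe_per_network': 1}
```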
22. Red Hat Ceph Storage: COST/CAPACITY-OPTIMIZED SOLUTION

CPU                      0.5 cores per HDD, assuming a 2 GHz CPU
RAM                      16 GB baseline, plus 2 GB per OSD
Networking               10 GbE per 12 OSDs (each for client- and cluster-facing networks)
OSD media                7,200 RPM enterprise HDDs
OSDs                     One per HDD
Journal media            Co-located on the HDD
Host Bus Adapter (HBA)   Just a bunch of disks (JBOD)
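Slides 20-22 differ mainly in OSD media, OSD density, and journal placement; the dictionary below condenses the three profiles for quick comparison, restating the tables without adding anything new.

```python
# Summary of the three Red Hat Ceph Storage reference profiles from slides 20-22.
CEPH_PROFILES = {
    "iops-optimized": {
        "osd_media": "enterprise NVMe SSD",
        "osds_per_device": 4,
        "cpu": "10 cores per NVMe SSD (2 GHz)",
        "journal": "NVMe SSD, co-located with OSDs",
    },
    "throughput-optimized": {
        "osd_media": "7,200 RPM enterprise HDD",
        "osds_per_device": 1,
        "cpu": "0.5 cores per HDD (2 GHz)",
        "journal": "SAS or NVMe SSD (4-5:1 or 12-18:1 OSD-to-journal)",
    },
    "cost/capacity-optimized": {
        "osd_media": "7,200 RPM enterprise HDD",
        "osds_per_device": 1,
        "cpu": "0.5 cores per HDD (2 GHz)",
        "journal": "co-located on the HDD",
    },
}

for name, profile in CEPH_PROFILES.items():
    print(name, "->", profile["osd_media"], "| journal:", profile["journal"])
```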