At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
HKG15-401: Ceph and Software Defined Storage on ARM servers (Linaro)
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in a colocation environment, and ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... (Odinot Stanislas)
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks: sequential tests, random tests, and above all a comparison of results before and after optimization. The configuration parameters touched and the optimizations applied (large page numbers, OMAP data on a separate disk, ...) deliver at least a 2x performance gain.
CephFS performance testing was conducted on a Jewel deployment. Key findings include:
- Single-MDS performance is limited by its single-threaded design; metadata operations became CPU-bound
- Misbehaving clients can cause MDS out-of-memory (OOM) conditions by pushing the MDS past its inode cache limits; a cache-limit sketch follows this list
- Metadata operations such as create, open, and update showed similar performance, peaking at 4-5k ops/sec
- Caching had a large impact on performance once the working set exceeded the cache size
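Where a single MDS is hitting its inode-cache ceiling, the Jewel-era knob is mds_cache_size, an inode count. A minimal sketch of inspecting and raising it, assuming an MDS named mds.a and spare RAM on the MDS host (both assumptions for illustration, not from the findings above):

    # Inspect the current inode-cache limit (Jewel counts inodes, not bytes)
    ceph daemon mds.a config get mds_cache_size
    # Raise it at runtime; persist the change under [mds] in ceph.conf
    ceph tell mds.a injectargs '--mds_cache_size 500000'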
Build a High-Performance and Highly Durable Block Storage Service Based on Ceph (Rongze Zhu)
This document discusses building a high-performance, highly durable block storage service on Ceph. It describes the architecture, including a minimum deployment of 12 OSD nodes and 3 monitor nodes, and outlines optimizations to Ceph, QEMU, and the operating system configuration that achieve 6,000 IOPS and 170 MB/s of throughput. It also discusses how the CRUSH map can be tuned to reduce recovery times and the number of copysets, improving durability to 99.99999999%.
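CRUSH map tuning of the kind described is done offline with crushtool. A minimal round-trip sketch; the specific rule edit shown is illustrative, not the one from the talk:

    ceph osd getcrushmap -o crush.bin        # export the compiled CRUSH map
    crushtool -d crush.bin -o crush.txt      # decompile to editable text
    # edit crush.txt, e.g. widen the failure domain:
    #   step chooseleaf firstn 0 type host  ->  step chooseleaf firstn 0 type rack
    crushtool -c crush.txt -o crush.new      # recompile
    ceph osd setcrushmap -i crush.new        # inject the new map (triggers rebalancing)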
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES (Jan Kalcic)
This document discusses quick deployment of a Ceph storage cluster using SUSE Linux Enterprise Server (SLES). It provides an overview of Ceph and its components, and steps for provisioning a Ceph cluster including bootstrapping an initial monitor, adding OSDs, and configuring a PXE boot server for automated installation. It also briefly introduces tools like SUSE Studio for appliance building and SUSE Manager for systems management that can aid in deploying and managing the Ceph cluster.
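For clusters of this era, ceph-deploy was the usual quick-start path alongside the SLES tooling. A minimal sketch, assuming three hosts node1..node3 with node1 as the initial monitor (hostnames and disks are placeholders):

    ceph-deploy new node1                                  # write initial ceph.conf and monitor keyring
    ceph-deploy install node1 node2 node3                  # install Ceph packages on all hosts
    ceph-deploy mon create-initial                         # bootstrap the monitor and gather keys
    ceph-deploy osd create node2:/dev/sdb node3:/dev/sdb   # prepare and activate OSDs
    ceph -s                                                # confirm the cluster reaches HEALTH_OK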
Storage tiering and erasure coding in Ceph (SCaLE13x) (Sage Weil)
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overhead. In practice, however, erasure codes have different performance characteristics than traditional replication and, under some workloads, carry a performance cost. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk covers a few Ceph fundamentals, introduces the new tiering and erasure coding features, and then surveys the variety of ways the new capabilities can be leveraged.
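The two features compose directly at the pool level. A minimal sketch of an erasure-coded base pool fronted by a replicated cache tier; pool names, PG counts, and k/m values here are illustrative:

    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 128 128 erasure ecprofile   # cold, erasure-coded base pool
    ceph osd pool create hotpool 128                        # fast replicated pool (e.g. on flash)
    ceph osd tier add ecpool hotpool                        # attach hotpool as a tier of ecpool
    ceph osd tier cache-mode hotpool writeback              # absorb writes in the cache tier
    ceph osd tier set-overlay ecpool hotpool                # route client I/O through the tier
    # a production tier also needs hit_set_type, target_max_bytes, etc.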
Vijayendra Shamanna from SanDisk presented on optimizing the Ceph distributed storage system for all-flash architectures. Some key points:
1) Ceph is an open-source distributed storage system that provides file, block, and object storage interfaces. It operates by spreading data across multiple commodity servers and disks for high performance and reliability.
2) SanDisk has optimized various aspects of Ceph's software architecture and components like the messenger layer, OSD request processing, and filestore to improve performance on all-flash systems.
3) Testing showed the optimized Ceph configuration delivering over 200,000 IOPS and low latency with random 8K reads on an all-flash setup.
Ceph - High Performance Without High Costs (Jonathan Long)
Ceph is a storage platform that aims to deliver high performance without high costs. The presentation discusses BlueStore, a redesign of Ceph's object store to improve performance and efficiency. BlueStore preserves wire compatibility but uses an incompatible storage format. It aims to double write performance and match or exceed the read performance of the previous FileStore design. BlueStore simplifies the architecture and uses algorithms tailored for different hardware such as flash. It shipped as a tech preview in the Jewel release and aims to be the default in the Luminous release.
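In Jewel, BlueStore sat behind an experimental flag. A minimal sketch of enabling it on a throwaway test cluster; as the option name itself warns, this was explicitly unsafe for production data at the time:

    # ceph.conf, Jewel-era tech preview only
    [global]
    enable experimental unrecoverable data corrupting features = bluestore rocksdb
    [osd]
    osd objectstore = bluestore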
Journey to Stability: Petabyte Ceph Cluster in OpenStack Cloud (Patrick McGarry)
Cisco Cloud Services provides an OpenStack platform to Cisco SaaS applications using a worldwide deployment of Ceph clusters storing petabytes of data. The initial Ceph cluster design experienced major stability problems as the cluster grew past 50% capacity. Strategies were implemented to improve stability including client IO throttling, backfill and recovery throttling, upgrading Ceph versions, adding NVMe journals, moving the MON levelDB to SSDs, rebalancing the cluster, and proactively detecting slow disks. Lessons learned included the importance of devops practices, sharing knowledge, rigorous testing, and balancing performance, cost and time.
The document discusses strategies for optimizing Ceph performance at scale. It describes the presenters' typical node configurations, including storage nodes with 72 HDDs and NVMe journals, and monitor/RGW nodes. Various techniques are discussed, such as ensuring proper NUMA alignment of processes, IRQs, and mount points. General tuning tips include using the latest drivers, OS tuning, and addressing network issues. The document stresses that monitors can become overloaded during large rebalances and when deleting large pools, so more than one monitor is needed for large clusters.
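NUMA alignment is checkable and settable from the shell. A minimal sketch, assuming an NVMe journal behind controller nvme0 and an OSD with id 0 (device names, ids, and node numbers are placeholders; sysfs paths can vary by kernel):

    cat /sys/class/nvme/nvme0/device/numa_node   # which NUMA node owns the NVMe controller
    numactl --hardware                           # list nodes, their CPUs and memory
    # run the OSD on the node that owns its journal device
    numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 -f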
Red Hat Ceph Storage Acceleration Utilizing Flash Technology (Red_Hat_Storage)
Red Hat Ceph Storage can utilize flash technology to accelerate applications in three ways: 1) use all-flash storage for the highest performance, 2) use a hybrid configuration with performance-critical data on a flash tier and colder data on an HDD tier, or 3) utilize host caching of critical data on flash. Benchmark results showed that using NVMe SSDs in Ceph provided much higher performance than SATA SSDs, with speed increases of up to 8x for some workloads. However, testing also showed that Ceph may not be well suited for OLTP MySQL workloads due to small random reads/writes, as local SSD storage outperformed the Ceph cluster. Proper Linux tuning is also needed to maximize SSD performance within Ceph.
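Linux-side SSD tuning of the sort referenced usually starts at the block-queue level. A minimal sketch for a SATA SSD at /dev/sdb, run as root (the device name is a placeholder and the values are common starting points, not figures from this document):

    echo noop > /sys/block/sdb/queue/scheduler      # skip elevator reordering on flash
    echo 0 > /sys/block/sdb/queue/rotational        # mark the device non-rotational
    echo 4096 > /sys/block/sdb/queue/nr_requests    # deepen the request queue
    echo 128 > /sys/block/sdb/queue/read_ahead_kb   # modest readahead for random I/O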
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document presents an all-flash Ceph array design from QCT based on NUMA architecture. It covers all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, QCT's lab environment and detailed architecture, and the importance of NUMA. It also explains why all-flash storage is used, the benefits of NVMe storage, Ceph tuning recommendations, and the benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
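Running several OSDs per NVMe device comes down to partitioning it. A minimal sketch creating four equal partitions on one drive (device name and partition count are placeholders; how many OSDs per drive is a tuning choice, not fixed by Ceph):

    parted -s /dev/nvme0n1 mklabel gpt
    for i in 0 1 2 3; do
      parted -s /dev/nvme0n1 mkpart osd-$i $((i*25))% $(((i+1)*25))%
    done
    lsblk /dev/nvme0n1   # verify nvme0n1p1..p4 before handing them to ceph-disk/ceph-deploy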
New Ceph capabilities and Reference Architectures (Kamesh Pemmaraju)
Have you heard about Inktank Ceph and are interested to learn some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you!
In this two-part session you will learn:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise, such as a new erasure-coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned, and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Ceph Object Storage Performance Secrets and Ceph Data Lake Solution (Karan Singh)
In this presentation, I explain how Ceph object storage performance can be improved drastically, along with some object storage best practices, recommendations, and tips. I also cover the Ceph shared data lake, which is getting very popular.
Ceph Object Storage Reference Architecture Performance and Sizing Guide (Karan Singh)
Together with my colleagues on the Red Hat Storage team, I am very proud to have worked on this reference architecture for Ceph Object Storage.
If you are building Ceph object storage at scale, this document is for you.
This document provides an overview and summary of Red Hat Storage and Inktank Ceph. It discusses Red Hat acquiring Inktank Ceph in April 2014 and the future of Red Hat Storage having two flavors - Gluster edition and Ceph edition. Key features of Red Hat Storage 3.0 include enhanced data protection with snapshots, cluster monitoring, and deep Hadoop integration. The document also introduces Inktank Ceph Enterprise v1.2 and discusses Ceph components like RADOS, LIBRADOS, RBD, RGW and how Ceph can be used with OpenStack.
QCT Ceph Solution - Design Consideration and Reference Architecture (Patrick McGarry)
This document discusses QCT's Ceph storage solutions, including an overview of Ceph architecture, QCT hardware platforms, Red Hat Ceph software, workload considerations, reference architectures, test results and a QCT/Red Hat whitepaper. It provides technical details on QCT's throughput-optimized and capacity-optimized solutions and shows how they address different storage needs through workload-driven design. Hands-on testing and a test drive lab are offered to explore Ceph features and configurations.
How To Build A Scalable Storage System with OSS at TLUG Meeting 2008/09/13 (Gosuke Miyashita)
The document discusses Gosuke Miyashita's goal of building a scalable storage system for his company's web hosting service. He is exploring the use of several open source technologies including cman, CLVM, GFS2, GNBD, DRBD, and DM-MP to create a storage system that provides high availability, flexible I/O distribution, and easy extensibility without expensive hardware. He outlines how each technology works and shows some example configurations, but notes that integrating many components may introduce issues around complexity, overhead, performance, stability and compatibility with non-Red Hat Linux.
This document outlines a course on Ceph storage. It covers Ceph history and components. Key components discussed include Object Storage Daemons (OSDs) that store data, Monitors that maintain the cluster map and provide consensus, and the Ceph journal. Other topics are the Ceph Gateway for object storage access, Ceph Block Device (RBD) for block storage, and CephFS for file storage. The goal is to understand Ceph concepts and deploy a Ceph cluster.
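The course's goals map to a handful of everyday commands. A minimal sketch for verifying each component it describes, assuming a running cluster and an admin keyring:

    ceph -s           # overall health, monitor quorum, OSD and PG counts
    ceph mon stat     # monitor quorum detail
    ceph osd tree     # OSDs laid out under the CRUSH hierarchy
    ceph df           # per-pool capacity and usage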
This document provides an overview and planning guidelines for a first Ceph cluster. It discusses Ceph's object, block, and file storage capabilities and how it integrates with OpenStack. Hardware sizing examples are given for a 1 petabyte storage cluster with 500 VMs requiring 100 IOPS each. Specific lessons learned are also outlined, such as realistic IOPS expectations from HDD and SSD backends, recommended CPU and RAM per OSD, and best practices around networking and deployment.
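A back-of-envelope check of that sizing example, assuming 3x replication, a write-heavy profile, and roughly 150 random IOPS per HDD (all assumptions for illustration, not figures from the document):

\[ 500 \text{ VMs} \times 100 \text{ IOPS} = 50{,}000 \text{ client IOPS} \]
\[ 50{,}000 \times 3 \text{ (replica writes)} = 150{,}000 \text{ backend IOPS} \]
\[ 150{,}000 \div 150 \text{ IOPS/HDD} \approx 1{,}000 \text{ HDDs, or far fewer SSD-backed drives} \]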
Ceph is an open-source distributed storage system that provides object storage, block storage, and file storage functionality. It uses a technique called CRUSH to automatically distribute data across clusters of commodity servers and provide fault tolerance. Ceph block storage (RBD) can be used as reliable virtual disk images for virtual machines and containers, enabling features like live migration. RBD integration is currently being improved for better performance and compatibility with virtualization platforms like Xen and OpenStack.
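The RBD workflow the summary describes is a few commands end to end. A minimal sketch, assuming a pool named vms and an image named disk1 (both placeholders):

    rbd create vms/disk1 --size 10240   # 10 GiB virtual disk image
    rbd info vms/disk1                  # image size, order, and features
    rbd map vms/disk1                   # expose as a kernel block device (/dev/rbd*);
                                        # krbd may require disabling newer image features
    # hypervisors typically attach the image through librbd (QEMU/libvirt) instead,
    # which is what enables live migration between hosts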
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ... (Odinot Stanislas)
From Intel's developer forum (IDF), here is a rather nice presentation on so-called "scale-out" storage, with an overview of the various solution providers (slide 6), covering those offering file, block, and object modes, followed by benchmarks of some of them, including Swift, Ceph, and GlusterFS.
This document outlines an agenda for a presentation on running MySQL on Ceph storage. It includes a comparison of MySQL on Ceph versus AWS, results from a head-to-head performance lab test between the two platforms, and considerations for hardware architectures and configurations optimized for MySQL workloads on Ceph. The lab tests showed that Ceph could match or exceed AWS on both performance metrics like IOPS/GB and price/performance metrics like storage cost per IOP.
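Head-to-head MySQL comparisons like this are usually driven with a standard OLTP load generator. A minimal sketch using current sysbench (1.0+) syntax, assuming a MySQL instance whose datadir sits on an RBD volume; host, credentials, and table sizing are placeholders, not the presenters' actual harness:

    sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
        --mysql-password=secret --tables=8 --table-size=1000000 prepare
    sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
        --mysql-password=secret --tables=8 --table-size=1000000 \
        --threads=32 --time=300 run   # reports transactions/s and latency percentiles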
This document summarizes a distributed storage system called Ceph. Ceph uses an architecture with four main components - RADOS for reliable storage, Librados client libraries, RBD for block storage, and CephFS for file storage. It distributes data across intelligent storage nodes using the CRUSH algorithm and maintains reliability through replication and erasure coding of placement groups across the nodes. The monitors manage the cluster map and placement, while OSDs on each node store and manage the data and metadata.
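The RADOS layer beneath those interfaces can be exercised directly, which also makes CRUSH placement visible. A minimal sketch with the rados CLI, assuming an existing pool named testpool (a placeholder):

    echo 'hello ceph' > /tmp/obj.txt
    rados -p testpool put greeting /tmp/obj.txt   # store a named object
    rados -p testpool get greeting /tmp/out.txt   # read it back
    ceph osd map testpool greeting                # show the PG and OSDs CRUSH chose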
Red Hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware (Red_Hat_Storage)
This document discusses how data growth driven by mobile, social media, IoT, and big data/cloud is requiring a fundamental shift in storage cost structures from scale-up to scale-out architectures. It provides an overview of key storage technologies and workloads driving public cloud storage, and how Ceph can help deliver on the promise of the cloud by providing next generation storage architectures with flash to enable new capabilities in small footprints. It also illustrates the wide performance range Ceph can provide for different workloads and hardware configurations.
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw... (Red_Hat_Storage)
This document discusses the need for storage modernization driven by trends like mobile, social media, IoT and big data. It outlines how scale-out architectures using open source Ceph software can help meet this need more cost effectively than traditional scale-up storage. Specific optimizations for IOPS, throughput and capacity are described. Intel is presented as helping advance the industry through open source contributions and optimized platforms, software and SSD technologies. Real-world examples are given showing the wide performance range Ceph can provide.
Ceph Day Beijing - Storage Modernization with Intel & Ceph (Ceph Community)
The document discusses trends in data growth and storage technologies that are driving the need for storage modernization. It outlines Intel's role in advancing the storage industry through open source technologies and standards. Specifically, it focuses on Intel's work optimizing Ceph for Intel platforms, including performance profiling, enabling Intel optimized solutions, and end customer proofs-of-concept using Ceph with Intel SSDs, Optane, and platforms.
Ceph Day Beijing - Storage Modernization with Intel and Ceph (Danielle Womboldt)
The document discusses trends in data growth and storage technologies that are driving the need for storage modernization. It outlines Intel's role in advancing the storage industry through open source technologies and standards. A significant portion of the document focuses on Intel's work optimizing Ceph for Intel platforms, including profiling and benchmarking Ceph performance on Intel SSDs, 3D XPoint, and Optane drives.
In-Memory and TimeSeries Technology to Accelerate NoSQL Analytics (sandor szabo)
The ability of Informix to combine the in-memory performance of the Informix Warehouse Accelerator with the flexibility of TimeSeries and NoSQL analytics positions it to be ready for the IoT era.
1. The document introduces the Intel Xeon Scalable platform, which provides the foundation for data center innovation with a 1.65x average performance boost over previous generations.
2. It highlights key advantages of the platform including scalable performance, agility in rapid service delivery, and hardware-enhanced security with near-zero performance overhead.
3. Various workload-optimized solutions are discussed that leverage the platform's performance to accelerate insights from analytics, deploy cloud infrastructure more quickly, and transform networks.
The document discusses accelerating Ceph storage performance using SPDK. SPDK introduces optimizations like asynchronous APIs, userspace I/O stacks, and polling mode drivers to reduce software overhead and better utilize fast storage devices. This allows Ceph to better support high performance networks and storage like NVMe SSDs. The document provides an example where SPDK helped XSKY's BlueStore object store achieve significant performance gains over the standard Ceph implementation.
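SPDK's userspace model first requires taking NVMe devices away from the kernel and reserving hugepages, which SPDK's own helper script handles. A minimal sketch (paths are relative to an SPDK checkout; the hugepage size is a placeholder):

    git clone https://github.com/spdk/spdk && cd spdk
    sudo HUGEMEM=4096 scripts/setup.sh   # reserve 4 GiB of hugepages, bind NVMe to vfio/uio
    scripts/setup.sh status              # show which devices are kernel- vs SPDK-owned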
Fujitsu World Tour 2017 - Compute Platform For The Digital World (Fujitsu India)
Fujitsu has decades of experience designing and manufacturing servers. Their PRIMERGY servers are known for best-in-class quality that ensures continuous operation with almost no unplanned downtimes. This is achieved through rigorous testing and manufacturing processes in their state-of-the-art factories in Germany. Fujitsu's demand-driven manufacturing approach allows them to produce servers flexibly based on current orders, enabling fast response times and fulfilling individual customer requests.
Join us for an exciting and informative preview of the broadest range of next-generation systems optimized for tomorrow’s data center workloads, Powered by 4th Gen Intel® Xeon® Scalable Processors (formerly codenamed Sapphire Rapids).
Experts from Supermicro and Intel will discuss how the upcoming Supermicro X13 systems will enable new performance levels utilizing state-of-the-art technology, including DDR5, PCIe 5.0, Compute Express Link™ 1.1, and Intel® Advanced Matrix Extensions (Intel AMX).
DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence (inside-BigData.com)
In this deck, Johann Lombardi from Intel presents: DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence.
"Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel Optane DC persistent memory and Intel Optane DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."
Unlike traditional storage stacks that were primarily designed for rotating media, DAOS is architected from the ground up to make use of new NVM technologies, and it is extremely lightweight because it operates end-to-end in user space with full operating system bypass. DAOS offers a shift away from an I/O model designed for block-based, high-latency storage to one that inherently supports fine-grained data access and unlocks the performance of next-generation storage technologies.
Watch the video: https://youtu.be/wnGBW31yhLM
Learn more: https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Webinar: Dell VRTX - an all-in-one datacenter at a great price / 7.10.2013 (Jaroslav Prodelal)
Can you imagine running your datacenter in an office environment? Yes, it is possible. Dell has introduced a so-called datacenter-in-a-box (all-in-one) that is optimized (noise reduction, power) to run even in an office; of course, you can also put it in a separate room.
The Dell VRTX combines, in a single 5U chassis, compute power (up to four 2-CPU servers), disk storage (up to 24 HDDs), and networking.
In the webinar we will introduce this very attractively priced product and show the difference between this solution and the alternatives built from separate servers, a disk array, and network switches.
Agenda:
* what is Dell VRTX?
* the customer segment for VRTX
* what VRTX offers
* solutions run on VRTX
* technical specifications
* possible uses
* price
* current offers and promotions
The document discusses reimagining the datacenter through software defined infrastructure. This allows datacenters to become more dynamic, automated and efficient by treating compute, storage and networking resources as composable blocks that can be allocated on demand. This approach breaks down traditional silos and allows simpler deployment and maintenance while improving agility, automation and efficiency. The software defined approach is compared to the traditional rigid infrastructure model and examples are given of how it can improve provisioning times, utilization rates and flexibility.
Accelerating Mission Critical Transformation at Red Hat Summit 2011 (Pauline Nist)
This document discusses accelerating mission critical workloads by migrating them from legacy and proprietary systems to open standard x86 platforms based on Intel Xeon processors. It provides an overview of how Intel is enabling this transition through improved performance, reliability, and security features in recent Xeon generations, as well as growing ecosystem support. Analyst reports and customer quotes are presented showing the migration of mission critical workloads from RISC/UNIX platforms to Xeon, driven by lower costs and comparable capabilities.
Diane Bryant of Intel's Data Center & Connected Systems Group shares the latest news on the Intel Xeon E5 v2 family of processors and technologies like Intel Network Builders that enable the re-architecture of the data center.
Noile solutii Intel pentru afaceri eficiente-tm-20mai2010 (Agora Group)
This document discusses Intel technologies for efficient IT infrastructures and provides the following key points:
1. Intel has fabrication facilities around the world producing microchips on advanced technology nodes.
2. Intel's Core processor family delivers scalable performance across devices from netbooks to servers through Intel architecture.
3. Intel's new 2010 Core processors feature technologies like Turbo Boost for intelligent performance and integrated graphics.
3. The World is Changing
Information Growth: from now until 2020, the size of the digital universe will about double every two years (2X).
Complexity: what we do with data is changing; traditional storage infrastructure does not solve tomorrow's problems.
Cloud: IT services are shifting to cloud computing and next-generation platforms.
New Technologies: flash storage and software-defined environments are emerging.
Source: IDC - The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things, April 2014
4. Information Explosion
Every minute of every day:*
2013: 48 hours of video uploaded to YouTube; 47,000 apps downloaded; 200 million e-mails
2015: 300 hours of video uploaded to YouTube; 51,000 apps downloaded; 204 million e-mails
Source: TechSpartan.co.uk - 2013 vs 2015 in an Internet minute
5. The Impact of the Cloud
• Empowerment of the end user through cloud services
• Emergence of new technologies and architectures
• Shifting the role of information technology professionals
11. Where Data is Created
[Chart: data creation by type (ZB), 2010-2020, structured vs. unstructured, growing from 4.4 ZB to 44 ZB]
By 2020, about 90% of all data will be unstructured, driven by consumer images, voice, and the web; only about 10% will be structured.
[Chart: % of total digital universe, 2012-2020, emerging vs. mature markets]
Emerging markets (China, India, Mexico, Brazil, and Russia) will surpass mature markets (USA, Canada, Western Europe, Australia, NZ, and Japan) in data creation before 2017.
Sources: IDC, 2011 Worldwide Enterprise Storage Systems 2011-2015 Forecast Update; IDC, The Digital Universe Study, 2014
16. 3D XPoint™ Technology: A New Class of Non-Volatile Memory
Latency: ~100x; size of data: ~1,000x
• 1000x faster than NAND
• 1000x the endurance of NAND
• 10x denser than DRAM
Technology claims are based on comparisons of latency, density, and write-cycling metrics among memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
17. Intel® Solid State Drives
"The only SSDs that never ever gave me any issues like timeouts, task aborts... are Intel DC S3700s"
From a post on ceph-devel*
Source: http://ceph.com/resources/mailing-list-irc
18. Intel's Role in Storage
Advance the industry: open source & standards.
Build an open ecosystem: Intel® Storage Builders, 70+ partners.
End-user solutions (cloud, enterprise) - helping customers enable next-gen storage:
• >7 cloud storage solution architectures
• >26 next-gen solution architectures
• >10 enterprise storage solution architectures
Intel technology leadership:
• Storage-optimized CPUs: Intel® Xeon® E5 v4 2600 platform, Intel® Xeon® processor D-1500 platform
• Storage-optimized software: Intel® Intelligent Storage Acceleration Library (ISA-L), Intel® Storage Performance Development Kit (SPDK)
• Non-volatile memory: 3D XPoint™, Intel® Solid State Drives for the datacenter
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
20. Intel Ceph Contribution Timeline
Spanning the Giant*, Hammer, Infernalis, and Jewel releases, 2014-2016 (* right edge of box indicates approximate release date):
• New key/value store backend (RocksDB)
• CRUSH placement algorithm improvements (straw2 bucket type)
• RADOS I/O hinting (35% better EC write performance)
• Cache tiering with SSDs (write support)
• Cache tiering with SSDs (read support)
• Erasure coding support with ISA-L
• Virtual Storage Manager (VSM) open sourced
• CeTune open sourced
• Client-side block cache (librbd)
• BlueStore backend optimizations for NVM
• BlueStore SPDK optimizations
• PMStore (NVM-optimized backend based on libpmem)
• RGW and BlueStore compression and encryption (with ISA-L, QAT backends)
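Of these, the ISA-L work is directly visible to operators as an erasure-code plugin. A minimal sketch of creating an ISA-L-backed profile and pool; the profile name, pool name, PG count, and k/m values are placeholders:

    ceph osd erasure-code-profile set isaprofile plugin=isa k=2 m=1 technique=reed_sol_van
    ceph osd pool create ecpool-isa 64 64 erasure isaprofile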
21. Ceph@Intel - 2016 Ceph Focus Areas
Optimize for Intel® platforms, flash and networking:
• Compression and encryption hardware offloads (QAT & SoCs)
• PMStore (for 3D XPoint DIMMs)
• RBD caching and cache tiering with NVM
• IA-optimized storage libraries to reduce latency (ISA-L, SPDK)
Performance profiling, analysis and community contributions:
• All-flash workload profiling and latency analysis
• Streaming, database and analytics workload-driven optimizations
Ceph enterprise usages and hardening:
• Manageability (Virtual Storage Manager)
• Multi-data-center clustering (e.g., async mirroring)
End-customer POCs with focus on broad industry influence:
• CDN, cloud DVR, video surveillance, Ceph cloud services, and analytics POCs
Go to market:
• Ready-to-use IA and Intel NVM-optimized systems & solutions from OEMs & ISVs
• Intel system configurations, white papers, case studies
• Industry events coverage
Tooling: Intel® Storage Acceleration Library (Intel® ISA-L), Intel® Storage Performance Development Kit (SPDK), Intel® Cache Acceleration Software (Intel® CAS), Virtual Storage Manager, CeTune Ceph profiler
23. 4K Random Read & Write Performance Summary
First Ceph cluster to break 1 million 4K random IOPS.
Workload Pattern                                   | Max IOPS
4K 100% random reads (2TB dataset)                 | 1.35 million
4K 100% random reads (4.8TB dataset)               | 1.15 million
4K 100% random writes (4.8TB dataset)              | 200K
4K 70%/30% read/write OLTP mix (4.8TB dataset)     | 452K
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
Source: OpenStack Summit 2015: Accelerating Cassandra workloads on Ceph with all-flash PCIe SSDs
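Numbers like these are typically gathered with fio's rbd engine. A minimal sketch of a 4K random-read job against an existing image; the pool, image, client name, queue depth, and job count are placeholders, not the tested configuration:

    fio --name=randread-4k --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench-img \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=300 --time_based --group_reporting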
24. Red Hat Ceph Reference Architecture Documents
25. Meta-Formula for Ceph Deployments
• Have a general understanding of the use cases you want to support with Ceph
• Understand the kind of performance or cost/performance you want to deliver
• Refer to a reference architecture resource to match your use case(s) with known and measured reference architectures:
  • http://www.redhat.com/en/resources/performance-and-sizing-guide-red-hat-ceph-storage-qct-servers
  • https://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf
• These documents include Ceph configuration, tuning, and best-practices guidance
• Additional help is available from Red Hat, including support and quick start