HPC DAY 2017 - http://www.hpcday.eu/
HPE Storage and Data Management for Big Data
Volodymyr Saviak | CEE HPC & POD Sales Manager at HPE
HPC DAY 2017 | The network part in accelerating Machine-Learning and Big-Data | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
The network part in accelerating Machine-Learning and Big-Data
Boris Neiman | Sr. System Engineer at Mellanox
HPC DAY 2017 | HPE Strategy And Portfolio for AI, BigData and HPC | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
HPE Strategy And Portfolio for AI, BigData and HPC
Volodymyr Saviak | CEE HPC & POD Sales Manager at HPE
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Architecture | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
HPC DAY 2017 | Altair's PBS Pro: Your Gateway to HPC Computing | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Altair's PBS Pro: Your Gateway to HPC Computing
Dr. Jochen Krebs | Director Enterprise Sales Central & Eastern Europe at Altair
2016 Sept 1st - IBM Consultants & System Integrators Interchange - Big Data -... | Anand Haridass
An unprecedented increase in the use of digital devices is causing an explosion in the amount of data generated and captured by businesses. The need to extract economic value from all this "Big Data", which has the potential to transform businesses completely, is immense and drives a whole slew of new workloads. Organizations need to continuously align strategy, business processes, and infrastructure investments to derive these insights. This session will discuss how solutions based on POWER deliver this in a cost-effective, open, scalable, high-performing, and reliable manner.
Fujitsu World Tour 2017 - Compute Platform For The Digital World | Fujitsu India
Fujitsu has decades of experience designing and manufacturing servers. Their PRIMERGY servers are known for best-in-class quality that ensures continuous operation with almost no unplanned downtime. This is achieved through rigorous testing and manufacturing processes in their state-of-the-art factories in Germany. Fujitsu's demand-driven manufacturing approach allows them to produce servers flexibly based on current orders, enabling fast response times and fulfilling individual customer requests.
Jean Thomas Acquaviva from DDN presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief than a lightweight software approach may be sufficient for taking advantage of solid state media. Taking the data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfold to a more radical re-design of the software architecture and ultimately is making a case for an I/O interception layer."
Learn more: http://ddn.com
Watch the video presentation: http://wp.me/p3RLHQ-f7J
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Application Report: Big Data - Big Cluster Interconnects | IT Brand Pulse
As a leading analytics platform that runs on industry-standard hardware and integrates industry-standard database tools and applications, one of ParAccel's biggest challenges is to architect and test hardware (servers, storage, interconnects) that makes their software perform at its peak. In this case, they have achieved their mission to eliminate a cluster bottleneck by implementing 10GbE NICs to provide the bandwidth needed today, and well into the future.
The document discusses IBM AI solutions on Power systems. It provides an overview of key features including OpenPOWER collaboration, IBM machine learning and deep learning solutions designed for faster results, and Power9 servers adopted by research institutions. It then discusses specific IBM Power systems like the IBM Power AC922 that are optimized for AI workloads through features like CPU-GPU NVLink and large model support in TensorFlow.
Mellanox is a supplier of interconnect solutions headquartered in Israel with worldwide offices and over 2,700 employees. It provides adapters, switches, cables, and transceivers for high-speed InfiniBand and Ethernet connectivity. Mellanox's solutions accelerate high performance computing and artificial intelligence workloads through technologies like GPUDirect, RDMA, and in-network computing capabilities. Mellanox's products are used to build several of the world's fastest supercomputers and its technologies help unlock the power of artificial intelligence for leading companies.
"Huawei focuses on R&D of IT infrastructure, cooling solutions, software integration, and provides end-to-end HPC solution by building ecosystems with partners. Huawei help customers from different sectors and fields, solving challenges and problems with computing resources, energy expenditure and business needs. This presentation will introduce how Huawei brings fresh technologies to next-generation HPC solutions for more innovation, higher efficiency and scale, as well as presenting our best practices for HPC."
Watch the video presentation: http://wp.me/p3RLHQ-f8J
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IBM AI Solutions on Power Systems is a presentation about IBM's AI solutions. It introduces IBM Visual Insights for tasks like image classification, object detection, and segmentation. A use case demo shows breast cancer classification in under one second with high accuracy. Another demo detects diabetic retinopathy in eye images. The presentation discusses open issues in medical imaging AI and IBM's response to COVID-19, including an X-ray demo to detect COVID-19 in lung images. It calls for collaboration to share medical data and models.
IBM provides infrastructure to accelerate medical research tasks like genomics, molecular simulation, diagnostics, and quality inspection. This infrastructure delivers faster insights through high-performance data and AI deployed at massive scale on IBM Power Systems and Storage. Case studies show the infrastructure reduces time to results for tasks like processing millions of cryogenic electron microscope images from days to hours.
Wang XueSong has 9 years of experience in storage and currently works as a storage architect at Huawei Technologies. His roles have included software engineer, system engineer, and senior architect. He has experience developing several of Huawei's OceanStor storage solutions. Huawei's OceanStor 18000 V3 is a high-end storage solution that provides double fault tolerance, fast rebuilds under 30 minutes per terabyte, and industry-leading performance of 3 million IOPS with sub-1 millisecond latency. It serves as a new benchmark for high-end storage and can scale on demand to meet changing workloads such as cloud, big data, and backup/protection.
Ibm symp14 referentin_barbara koch_power_8 launch bk | IBM Switzerland
The document discusses IBM's Power Systems and how they are designed for big data and analytics workloads. Some key points:
- Power8 processors deliver 82x faster insights for business intelligence and analytics workloads compared to x86 servers.
- Power Systems create an open ecosystem for innovation through the OpenPOWER Foundation and enable industry partners to build servers optimized for the Power architecture.
- Power Systems foster open innovation for cloud applications by allowing over 95% of Linux applications written in common languages to run with no code changes.
- Power Systems are optimized for big data and analytics through features like high core counts, large memory and cache sizes, and high bandwidth I/O.
The advance of solid-state disks as a replacement for mechanical disks is driving gains in application performance and computing efficiency. However, the same advancements are also driving even more revolutionary changes in system memory. A new and even higher-performance persistent data storage layer is emerging that will enable broad adoption of the as-yet-unrealized power of real-time computing.
This document discusses DDN's optimization of Lustre and GPFS file systems. It provides an overview of DDN's extensive testing and benchmarking facilities and describes their long involvement with the Lustre file system, including major contributions to the open source code. It also presents performance results demonstrating the benefits of various DDN technologies and configurations.
The document discusses HP's strategy to provide IT infrastructure as a service (ITaaS). It outlines HP's portfolio of converged storage solutions including 3PAR, StoreOnce, StoreAll, and StoreVirtual. These solutions provide scalable, software-defined storage that can be deployed from small to large organizations to enable private and public cloud storage services. HP storage solutions are designed to improve efficiency, performance and manageability while reducing costs compared to traditional SAN solutions.
This document provides an overview of HPE solutions for challenges in AI and big data. It discusses HPE storage solutions including aggregated storage-in-compute using NVMe devices, tiered storage using flash, disk, and object storage, and zero watt storage to reduce power usage. It also covers the Scality object storage platform and WekaIO parallel file system for all-flash environments. The document aims to illustrate how HPE technologies can provide efficient, scalable storage for challenging AI and big data workloads.
The document discusses storage challenges facing organizations such as increasing data volumes and dynamic workloads. It introduces Oracle's approach to engineered systems that integrate optimized hardware and software to simplify storage management. Key benefits highlighted include automatic database and storage tuning, advanced data compression techniques, and optimized solutions for Oracle databases and applications.
Dror Goldenberg from Mellanox presented this deck at the HPC Advisory Council Switzerland Conference.
“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”
Watch the video presentation: http://wp.me/p3RLHQ-f7s
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
The document discusses how big data and analytics can transform businesses. It notes that the volume of data is growing exponentially due to increases in smartphones, sensors, and other data producing devices. It also discusses how businesses can leverage big data by capturing massive data volumes, analyzing the data, and having a unified and secure platform. The document advocates that businesses implement the four pillars of data management: mobility, in-memory technologies, cloud computing, and big data in order to reduce the gap between data production and usage.
Why Networked FICON Storage Is Better Than Direct Attached Storage | Hitachi Vantara
With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question "Do I need FICON switching technology, or should I go with direct attached storage?" is frequently asked. This webcast explores both technical and business reasons for implementing a switched FICON architecture instead of a direct attached storage FICON architecture for mainframe attached storage. The discussion will also include an overview of the Hitachi Data Systems and Brocade solutions for mainframe environments. By viewing this webcast, you’ll learn: The business and technical value of networking FICON attached storage instead of direct attached. The business and technical value of Hitachi mainframe storage capabilities. The offerings from Hitachi Data Systems and Brocade that can help you achieve the benefits of networked FICON storage. For more information on our mainframe solutions please read: http://www.hds.com/solutions/infrastructure/mainframe/?WT.ac=us_mg_sol_mnfr
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici... | Red_Hat_Storage
This document discusses using Ceph storage with Apache Hadoop to provide a scalable and efficient storage solution for big data workloads. It outlines the challenges of scaling Hadoop storage independently from compute resources using the native Hadoop Distributed File System. The solution presented is to use the open source Ceph storage system instead of direct-attached storage. This allows Hadoop compute and storage resources to scale independently and provides a centralized storage platform for all enterprise data workloads. Performance tests showed the Ceph and Hadoop configuration providing up to a 60% improvement in I/O performance when using Intel caching software and SSDs.
Greenplum: Driving the future of Data Warehousing and Analytics | eaiti
Greenplum provides a massively parallel processing (MPP) database for data warehousing and analytics. Their Enterprise Data Cloud initiative aims to address the challenges of commodity hardware, massive data scales, and user expectations by providing a platform for extreme scale, self-service provisioning of databases, and unified data access across a company. This new architecture directly addresses customer needs around business issues and opportunities by enabling elastic expansion, rapid creation of data marts and warehouses, and easy publishing and sharing of enterprise data.
The document discusses setting up a Raspberry Pi home server with OSMC and installing RetroPie for emulation. It includes instructions for installing OSMC, configuring the network interface, and installing packages like Webmin. Methods for connecting external storage are provided for storing ROMs and sharing files with Windows. Links are included for RetroPie installation scripts and addons for emulation frontend integration with OSMC. The presentation concludes by thanking the audience.
«When systems are not just dozens of subsystems, but dozens of engineering teams, even our best and most experienced engineers routinely guess wrong about the root cause of poor end-to-end performance». That is how they see it at Google.
The latency-tracing approach helps Google and many other companies keep stability and performance under control, and it helps find the root causes of performance degradation even in huge, complex distributed systems.
I'll explain what latency tracing is, how it helps you, and how you can implement it in your project. Finally, I will show a live demo using tools such as Dynatrace and Zipkin.
examples: https://github.com/kslisenko/java-performance
http://javaday.org.ua/kanstantsin-slisenka-profiling-distributed-java-applications/
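The core idea behind latency tracing is small enough to sketch without any library: every unit of work records a span carrying a shared trace id, its own span id, a parent id, and a duration, and a collector reassembles those spans into a latency waterfall. A minimal, dependency-free Java sketch of that idea (all class and field names here are invented for illustration; a real system would report spans to a backend such as Zipkin instead of printing them):

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

// Toy latency tracing: each operation records a span (trace id, span id,
// parent id, duration) so slow paths can be reconstructed across subsystems.
public class TracingSketch {

    static final class Span {
        final String traceId;
        final String spanId;
        final String parentId;   // null for the root span
        final String name;
        final long startNanos;

        Span(String traceId, String parentId, String name) {
            this.traceId = traceId;
            this.spanId = UUID.randomUUID().toString().substring(0, 8);
            this.parentId = parentId;
            this.name = name;
            this.startNanos = System.nanoTime();
        }

        void finish() {
            long micros = (System.nanoTime() - startNanos) / 1_000;
            System.out.printf("trace=%s span=%s parent=%s name=%s took=%dus%n",
                    traceId, spanId, parentId, name, micros);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        String traceId = UUID.randomUUID().toString().substring(0, 8);
        Span root = new Span(traceId, null, "handle-request");

        // Child spans share the trace id and point at their parent,
        // which is exactly what lets a UI draw the latency waterfall.
        Span db = new Span(traceId, root.spanId, "db-query");
        Thread.sleep(ThreadLocalRandom.current().nextInt(10, 30));
        db.finish();

        Span render = new Span(traceId, root.spanId, "render");
        Thread.sleep(5);
        render.finish();

        root.finish();
    }
}
```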
LinuxKit, a toolkit for building custom minimal, immutable Linux distributions.
- Secure defaults without compromising usability
- Everything is replaceable and customisable
- Immutable infrastructure applied to building Linux distributions
- Completely stateless, but persistent storage can be attached
- Easy tooling, with easy iteration
- Built with containers, for running containers
- Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
- Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
- Designed to be managed by external tooling, such as InfraKit or similar tools
- Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security
HPC DAY 2017 | Prometheus - energy efficient supercomputing | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Prometheus - energy efficient supercomputing
Marek Magrys | Manager of the Mass Storage Department, ACC Cyfronet AGH-UST
Model Simulation, Graphical Animation, and Omniscient Debugging with EcoreToo... | Benoit Combemale
You have your shiny new modeling language up and running thanks to the Eclipse Modeling Technologies and you built a powerful graphical editor with Sirius to support it. But how can you see what is going on when a model is executed? Don't you need to debug your design in some way? Wouldn't you want to see your editors being animated directly within your modeling environment based on execution traces or simulator results?
In this talk, we will present Sirius Animator, an add-on to Sirius that provides you with a tool-supported approach to complement a modeling language with an execution semantics and a graphical description of an animation layer. The execution semantics is defined thanks to ALE, an Action Language for EMF integrated into Ecore Tools to modularly implement the bodies of your EOperations, and the graphical description of the animation layer is defined thanks to Sirius. From both inputs, Sirius Animator automatically provides an advanced and extensible environment for model simulation, animation and debugging, on top of the graphical editor of Sirius and the debug UI of Eclipse. To illustrate the overall approach, we will demonstrate the ability to seamlessly extend Arduino Designer, in order to provide an advanced debugging environment that includes graphical animation, forward/backward step-by-step, breakpoint definition, etc.
1. The document discusses the history and evolution of GPUs and GPGPU programming. It describes how GPUs started as dedicated graphics cards but now have programmable capabilities through shaders.
2. It explains the key concepts of GPGPU including the host/device model, memory models, and execution models using concepts like work items, work groups, and ND ranges.
3. The document uses OpenCL as an example programming model, covering memory transfers between host and device, data types, and how a matrix multiplication kernel could be implemented in OpenCL using the execution model.
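To make that execution model concrete, here is a hedged plain-Java sketch of the work such a kernel distributes; in the actual OpenCL kernel the two outer loops disappear, because each work item in a 2D ND-range reads its (row, col) coordinates from get_global_id(0) and get_global_id(1) and computes a single element of C. The names and the row-major layout below are assumptions for illustration, not code from the deck:

```java
// Plain-Java stand-in for the computation an OpenCL matrix-multiply kernel
// spreads over a 2D ND-range: one loop iteration == one work item.
public class MatMulSketch {
    public static void main(String[] args) {
        int n = 4;
        float[] a = new float[n * n];
        float[] b = new float[n * n];
        float[] c = new float[n * n];
        // Fill a with 0..15 and make b the identity matrix for easy checking.
        for (int i = 0; i < n * n; i++) {
            a[i] = i;
            b[i] = (i % n == i / n) ? 1 : 0;
        }

        for (int row = 0; row < n; row++) {        // get_global_id(0)
            for (int col = 0; col < n; col++) {    // get_global_id(1)
                float acc = 0;
                for (int k = 0; k < n; k++) {
                    acc += a[row * n + k] * b[k * n + col]; // row-major layout
                }
                c[row * n + col] = acc;
            }
        }
        // b is the identity, so c must equal a.
        System.out.println(java.util.Arrays.toString(c));
    }
}
```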
Database Security Threats - MariaDB Security Best Practices | MariaDB plc
The document discusses database security best practices for MariaDB, including threats from the internet, applications, excessive trust, and recommendations for defense. It provides guidance on encryption, using a database proxy like MaxScale, user management, auditing, and security features of MariaDB and MaxScale such as authentication, encryption, attack protection, and data masking.
Libnetwork, Docker's networking plugin, has updated to be compatible with the Container Network Interface (CNI) specification used by Kubernetes. Libnetwork drivers can now be used as CNI plugins without changes to the CNM object model or Libnetwork core. This allows full integration with Kubernetes networking and use of Libnetwork's various networking drivers like bridge, overlay, and MacVlan through the CNI interface.
GPU databases - How to use them and what the future holds | Arnon Shimoni
GPU databases are the hottest new thing, with about 7 different companies producing their own variant. In this session, we will discuss why they were created, how they are already disrupting the database world, and what the future of computing holds for them.
This presentation demonstrates how the power of NVIDIA GPUs can be leveraged to both accelerate speed to insight and to scale the amount of hot and warm data analyzed to meet the increasing demands of data scientists and business intelligence professionals alike, as well as to find tactical and strategic insights with greater speed on exponentially growing datasets.
Organizations commonly believe that they are advancing in analytical capabilities due to the rise in the data science profession and the myriad of technologies available for analytics, business intelligence, artificial intelligence and machine learning. However, if you do the math, they are actually falling behind as the increases in the rates of data collection volume far outpace the rate of increases in hot and warm data used for analytics. This is causing organizations to rely on an ever-decreasing percentage of their information assets for decision making.
We talk about why GPU databases were created and share what sets SQream apart from other GPU databases, MPP solutions, and in-memory and Hadoop-based analytic alternatives.
We will also outline how an organization can use GPU databases to thrive in the information revolution by using a significantly greater percentage of its data for analytical purposes, obtaining insights that are desired today, and will remain cost-effective into the next few years when data lakes are expected to balloon from petabytes to exabytes.
The document discusses design patterns in Java, including the Factory, Abstract Factory, Singleton, and Prototype patterns. It provides code examples to illustrate each pattern. The Factory pattern section shows how to create objects from subclasses without specifying the exact class. The Abstract Factory pattern section demonstrates creating families of related objects without specifying their concrete classes. The Singleton pattern section covers ensuring only one instance of a class is created. And the Prototype pattern section shows copying or cloning existing objects to create new ones instead of using expensive constructors.
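For flavor, a minimal Java sketch combining two of the patterns named above, the Singleton and the Factory; the Shape/ShapeFactory names are invented for this example and are not taken from the document:

```java
// Singleton: a single lazily created factory instance.
// Factory: callers ask for a Shape by name instead of instantiating
// concrete classes themselves.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

final class ShapeFactory {
    private static ShapeFactory instance;          // Singleton storage

    private ShapeFactory() {}                      // block outside construction

    static synchronized ShapeFactory getInstance() {
        if (instance == null) instance = new ShapeFactory();
        return instance;
    }

    Shape create(String kind, double size) {       // Factory method
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown shape: " + kind);
        }
    }
}

public class PatternsDemo {
    public static void main(String[] args) {
        Shape s = ShapeFactory.getInstance().create("circle", 2.0);
        System.out.println(s.area());              // prints ~12.566
    }
}
```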
Getting Started with Embedded Python: MicroPython and CircuitPython | Ayan Pahwa
This document provides an introduction to MicroPython and CircuitPython, which allow Python programming on microcontrollers. MicroPython is a stripped-down version of Python 3 that runs directly on microcontrollers. It includes APIs for hardware modules like GPIOs, UART, PWM, etc. CircuitPython is a fork of MicroPython maintained by Adafruit for use on their educational boards. The document discusses supported boards, functions, and libraries, and ways to interact with MicroPython boards through the serial REPL, web REPL, file system, and emulation, plus demos of blinking an LED and measuring temperature/humidity.
2018-11-06: Unfortunately, LinkedIn/Slideshare disabled the update functionality and, thus, I had to upload an updated version of this introduction to OMNeT++ as new presentation. It is available here: https://www.slideshare.net/christian.timmerer/an-introduction-to-omnet-54
Calico is an open-source project that provides a layer 3 network implementation for scalable datacenter deployments using minimal packet encapsulation for better resource usage and network simplicity. It is highly scalable, secure, and simple compared to traditional overlays. Calico integrates with major orchestrators and uses standard APIs and protocols to provide a seamless experience for developers and operators while supporting communication with existing network switches and routers. It consists of components like Felix, orchestrator plugins, etcd, and BIRD that run on endpoints and distribute routing information.
Vert.x is a toolkit or platform for implementing reactive applications on the JVM.
Vert.x is an open-source project at the Eclipse Foundation. Vert.x was initiated in 2012 by Tim Fox.
It is a general-purpose application framework: polyglot (Java, Groovy, Scala, Kotlin, JavaScript, Ruby, and Ceylon), event-driven and non-blocking, lightweight and fast, with reusable modules.
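A minimal sketch of that event-driven, non-blocking style, assuming the io.vertx:vertx-core artifact on the classpath and the Vert.x 3.x callback API (the port and response text are arbitrary):

```java
import io.vertx.core.Vertx;

// Minimal Vert.x HTTP server: the request handler is a callback invoked on an
// event loop, so it must never block; listen() returns immediately.
public class HelloVertx {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                                       .putHeader("content-type", "text/plain")
                                       .end("Hello from Vert.x"))
             .listen(8080, ar -> {
                 if (ar.succeeded()) {
                     System.out.println("HTTP server listening on :8080");
                 } else {
                     ar.cause().printStackTrace();
                 }
             });
    }
}
```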
Scylla Summit 2017: Repair, Backup, Restore: Last Thing Before You Go to Prod... | ScyllaDB
Benchmarks are fun to do but when going to production, all sorts of things can happen: anything from hardware outages to human error bringing your database down. Even in a healthy database, a lot of maintenance operations have to periodically run. Do you have the tools necessary to make sure you are good to go?
Key transparency: Blockchain meets NoiseSocket / Alexey Ermishkin (Virgil) | Ontico
HighLoad++ 2017
Moscow hall, November 7, 18:00
Abstract:
http://www.highload.ru/2017/abstracts/2860.html
Key Transparency and Coniks are among the first examples of using blockchain not as a vehicle for cryptocurrency or transactions but in its original sense: guaranteeing the integrity of information. They make it possible to store public-key information in an auditable form, while offering unique features such as protection against identifier leaks and strict proofs of the presence, and even (!) the absence, of a record in the blockchain.
...
Pixel Shaders for Web Developers. Programming the GPU / Denis Radin (Li... | Ontico
HighLoad++ 2017
Moscow hall, November 7, 15:00
Abstract:
http://www.highload.ru/2017/abstracts/3017.html
Five years ago, shaders turned the world of computer graphics upside down, becoming the technology responsible for all the impressive special effects we see in computer games. Now they are ready to do the same to the web.
The talk covers the history of and reasons behind pixel shaders, how they work, and how they can be used when building ordinary web applications. Once the theory is done, we will move on to a step-by-step master class. How about writing your first pixel shader?
Logging and ranting / Vytis Valentinavičius (Lamoda) | Ontico
HighLoad++ 2017
Beijing+Shanghai hall, November 7, 16:00
Abstract:
http://www.highload.ru/2017/abstracts/2842.html
A story about real-life experience at Lamoda, featuring logging, forest animals, limited-size buffers, and morning routines.
Possible takeaways from this presentation:
1. Understanding the need of central log aggregation
2. Learning a few tips about logging and event aggregation
3. Saving a lot of money by implementing your own personal "poor-man's" NewRelic
...
Dataplane networking acceleration with OpenDataplane / Maksim Uvarov (Linaro) | Ontico
HighLoad++ 2017
Moscow hall, November 7, 13:00
Abstract:
http://www.highload.ru/2017/abstracts/2909.html
OpenDataPlane (ODP, https://www.opendataplane.org) is an open-source effort to develop an API for networking data-plane applications, providing an abstraction between the network chip and the application. Vendors such as TI, Freescale, and Cavium now release SDKs with ODP support for their SoCs. By analogy with the graphics stack, ODP can be compared to the OpenGL API, but in the domain of network programming.
...
Red Hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware | Red_Hat_Storage
This document discusses how data growth driven by mobile, social media, IoT, and big data/cloud is requiring a fundamental shift in storage cost structures from scale-up to scale-out architectures. It provides an overview of key storage technologies and workloads driving public cloud storage, and how Ceph can help deliver on the promise of the cloud by providing next generation storage architectures with flash to enable new capabilities in small footprints. It also illustrates the wide performance range Ceph can provide for different workloads and hardware configurations.
NetApp provides an enterprise-grade all-flash storage solution called AFF (All Flash FAS) that delivers flash performance and data services. SolidFire is another all-flash storage platform in NetApp's portfolio that is designed for large-scale infrastructure and can guarantee performance to thousands of applications through its quality of service features. The document discusses the benefits of flash storage and how NetApp's solutions help customers transform their data centers and lower costs through flash innovation like inline data compaction in ONTAP 9.
This document discusses the NetApp E5500 storage solution for Lustre file systems. It provides three key points:
1) The NetApp E5500 is designed to meet the demands of large Lustre file systems including supporting over 100TB of storage, 100,000 clients, and independent scaling of clients, storage, and bandwidth.
2) Lustre is an open source parallel file system used on over 60% of the world's largest supercomputers that separates data from metadata to deliver scale and performance.
3) Test results show the E5500 can deliver over 7,200 sustained MBps of throughput from compute nodes to a 250TB Lustre file system, demonstrating its performance at scale.
DDN: Massively-Scalable Platforms and Solutions Engineered for the Big Data a... | inside-BigData.com
In this talk from the DDN User Group at ISC’13, James Coomer from DataDirect Networks presents: Massively-Scalable Platforms and Solutions Engineered for the Big Data and Cloud Era.
Watch the presentation here: http://insidehpc.com/2013/06/26/video-james-coomer-keynotes-ddn-user-group-at-isc13/
This document summarizes Pivot3's hyperconverged infrastructure (HCI) solutions. It highlights that Pivot3's distributed scale-out architecture pools compute and storage resources across nodes for maximum utilization. It also notes that Pivot3's patented erasure coding provides efficient high availability that maintains performance during failures. The summary concludes that Pivot3's solutions offer scalable storage and performance to meet application needs.
MT47 Modernize infrastructure for a modern data center | Dell EMC World
Today's businesses need speed, efficiency, and agility to deliver services back to their stakeholders, all at an affordable price. In the Modern Data Center, flash, along with scale-out, software-defined solutions, helps to automate a modern infrastructure, the foundation of the modern data center. This session will show you how Dell EMC's industry-leading storage portfolio can transform your company's infrastructure and drive your success. In addition, learn how to protect your modern data center with Dell EMC's comprehensive data protection portfolio.
Follow us at @DellEMCStorage
Learn more about Dell EMC All-Flash Solutions at DellEMC.com/All-flash.
Hierarchical data management with SUSE Enterprise Storage and HPE DMF | SUSE Italy
In this session, HPE and SUSE use real-world cases to show how HPE Data Management Framework and SUSE Enterprise Storage solve the problem of managing exponential data growth by building a flexible, scalable, and economical software-defined architecture. (Alberto Galli, HPE Italia, and SUSE)
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
VMworld 2015: The Future of Software-Defined Storage - What Does it Look Like... | VMworld
The document discusses the future of software-defined storage in 3 years. It predicts that storage media will continue to advance with higher capacities and lower latencies using technologies like 3D NAND and NVDIMMs. Networking and interconnects like NVMe over Fabrics will allow disaggregated storage resources to be pooled and shared across servers. Software-defined storage platforms will evolve to provide common services for distributed data platforms beyond just block storage, with advanced data placement and policy controls to optimize different workloads.
HP Storage: Delivering Storage without Boundaries | jameshub12
HP provides several storage solutions including the P4000, 3PAR, X9000, and StoreOnce/D2D. The P4000 is a scale-out SAN optimized for server virtualization that offers features like thin provisioning, snapshots, and high availability. The 3PAR storage platform is designed for utility computing and reduces costs while improving storage management efficiency. The X9000 provides scale-out NAS for file serving workloads. StoreOnce/D2D offers scalable, efficient data deduplication and replication.
Huawei's all-flash storage solution, OceanStor Dorado, provides up to 3x improved application performance and 75% savings in operational expenses compared to conventional storage. It offers lightning fast performance with sub-millisecond latency and rock solid reliability of 99.9999% availability through its intelligent data management features and flash-native design. Huawei has over 10 years of experience developing solid state drives and all-flash arrays and their latest Dorado18000 V3 is positioned as the highest-end solution supporting NVMe protocol for the most demanding workloads.
VirtualStor Extreme - Software Defined Scale-Out All Flash Storage | GIGABYTE Technology
VirtualStor is a software-defined storage platform that aggregates and optimizes all storage resources to provide flexible storage solutions for any environment or application. It uses a scale-out architecture to deliver up to 10 million IOPS and 1PB of storage. VirtualStor offers high performance with sub-millisecond latency, low write amplification to extend SSD life, and the ability to consolidate and seamlessly migrate data from existing storage.
In this video from SC15, Larry Jones from Seagate provides an overview of the company's revamped HPC storage product line. At SC15, Seagate announced a major expansion of its HPC product portfolio including the ClusterStor HPC hard disk drive designed for Big Data applications.
Learn more: http://www.seagate.com/products/enterprise-servers-storage/enterprise-storage-systems/clustered-file-systems/
Watch the video presentation: http://wp.me/p3RLHQ-eMC
Sign up for our insideHPC Newsletter
Pilot Hadoop Towards 2500 Nodes and Cluster Redundancy | Stuart Pook
Hadoop has become a critical part of Criteo's operations. What started out as a proof of concept has turned into two in-house bare-metal clusters of over 2200 nodes. Hadoop contains the data required for billing and, perhaps even more importantly, the data used to create the machine learning models, computed every 6 hours by Hadoop, that participate in real time bidding for online advertising.
Two clusters do not necessarily mean a redundant system, so Criteo must plan for any of the disasters that can destroy a cluster.
This talk describes how Criteo built its second cluster in a new datacenter and how to do it better next time. How a small team is able to run and expand these clusters is explained. More importantly the talk describes how a redundant data and compute solution at this scale must function, what Criteo has already done to create this solution and what remains undone.
This document provides information on Overland Storage products including SnapServer NAS solutions, RDX removable disk storage, NEO tape automation platforms, and SnapScale scale-out clustered NAS. It describes various models within each product line including specifications and use cases. Key features highlighted include scaling capacity, data protection, replication, performance, and support for unstructured data storage across multiple locations.
See how Dell works efficiently with VMware to provide innovative architectures that are scalable and flexible. Learn about servers, networking, storage, and comprehensive systems management
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? | Red_Hat_Storage
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? By Kamesh Pemmaraju, Neil Levine
Have you heard about Inktank Ceph and are interested to learn some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you! In this two-part session you will learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise, such as a new erasure-coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned, and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Linux Support for SMARC: How Toradex Empowers Embedded Developers | Toradex
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP's i.MX 8M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and help you achieve quick time-to-market.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptx | Anoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights | Andrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
AI and Data Privacy in 2025: Global Trends | InData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding AI data privacy is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in today's world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
What is Model Context Protocol(MCP) - The new technology for communication bw... | Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On... | Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
HCL Nomad Web – Best Practices and Managing Multiuser Environments | panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web present unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
Complete Guide to Advanced Logistics Management Software in Riyadh.pdfSoftware Company
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
HPC DAY 2017 | HPE Storage and Data Management for Big Data
1. HPE Storage and Data Management for Big Data
Volodymyr Saviak, HPE HPC Sales Manager
2. Agenda
1. Different HPC Storage Requirements and solutions. Intro – 5 minutes
2. HPC Lustre based storage – 10 minutes
3. NVME low latency storage – 5 minutes
4. DMF as data management – 10 minutes
3. Mission Critical Storage and HPC Storage Differences
                           Mission Critical Storage       HPC Storage
Critical buying factors    • Robustness                   • Price, price performance
                           • Features                     • Throughput
                                                          • Capacity
Willing to compromise on   • Price, price performance     • Robustness
                           • Throughput                   • Features
                           • Capacity
Summary: Need different products for different markets
4. Different kinds of solutions we deliver
Feature                                    Parallel Performance    Low Latency Storage    Active Archive
Capacity                                   ++ (100 PB)             + (250 TB)             +++ (500 PB)
Multiple-node parallel file/object access  +++ (many thousands)    ++ (128/512)           +
High bandwidth                             +++                     ++                     ++
Bandwidth per node                         +++                     +++                    +
High IOPS                                  ++                      +++                    +
Low latency                                +                       ++                     -
Disaster tolerance                         -                       -                      +++
Heterogeneous access                       -/+                     -/+                    +++
Multiprotocol access                       -                       -                      ++
5. Recent requests coming from the field
Typical technical requirements (extract)
– HPC customer wants to build 1 PByte storage for genome sequencing with 30 GByte/s write performance
– Bank wants to build 100 TByte storage with near-memory performance for an Oracle data warehouse
– Telco wants to build 3 PByte storage with 80 GByte/s write throughput
– Government organization wants to build a 10 PByte archive with 99.9999% data availability
Scale-out storage helps to grow storage capacity without pain
7. Accelerate Your Workload With HPE NVMe SSDs
– Reliability: continuous performance with less downtime
– Efficiency: lower TCO
– Performance: faster business results
8. Difference between SATA and NVMe
– NVMe is actually not an interface but a language: a command protocol that is optimized to reduce overhead when making requests to the SSD.
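As a concrete illustration of what the new "language" changes: AHCI, the SATA host interface, defines one command queue 32 entries deep, while the NVMe specification allows up to 65,535 I/O queues of up to 65,536 commands each. A minimal Python sketch of that difference (these are protocol maxima, not what any particular drive or driver actually configures):

    # Contrast the command-queue models of AHCI (SATA) and NVMe.
    AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32            # AHCI: one queue, 32 slots
    NVME_QUEUES, NVME_QUEUE_DEPTH = 65_535, 65_536   # NVMe spec maxima

    def outstanding_commands(queues, depth):
        """Upper bound on commands in flight for a given queue model."""
        return queues * depth

    print(f"AHCI : {outstanding_commands(AHCI_QUEUES, AHCI_QUEUE_DEPTH):,}")
    print(f"NVMe : {outstanding_commands(NVME_QUEUES, NVME_QUEUE_DEPTH):,}")

That headroom for parallel, low-overhead requests is what the rest of this section builds on.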
9. NVMe Deployment Challenges
All Flash Arrays – applications can have access to large pools of flash, but with limitations:
• Array software that does not take full advantage of flash characteristics
• Network and fabric latencies
• I/O stack bottlenecks
• Capacity challenges: provisioning optimization is difficult
Server-side Flash – applications benefit from maximum flash performance, but without shared data:
• Creates data locality issues
• No centralized management
• Low utilization rates
10. Local NVMe Devices
Benefits
– Very low latency
– High IOPS
– High throughput
– Commodity pricing
The Reality
– DAS
– No logical volumes
– No data protection
– No high availability
– No application movement
– Excess (wasted) IOPS
11. How NVMe Scale-out Storage Works
[Architecture diagram: applications on unmodified NVMe clients issue I/O through an intelligent client block driver; the effective data path runs over a high-speed network via R-NICs directly to the NVMe target module and its NVMe drives, bypassing the target CPU, while a separate control path provides centralized management (GUI, RESTful HTTP).]
12. Converged, Disaggregated or Mixed
Converged – local storage in the application server:
• Storage is unified into one pool
• NVMesh Target Module & Intelligent Client Block Driver run on all nodes
• NVMesh bypasses the server CPU
• Linearly scalable
Disaggregated – storage is centralized:
• Storage is unified into one pool
• NVMesh Target Module runs on storage nodes
• Intelligent Client Block Driver runs on server nodes
• Applications get the performance of local storage
13. Performance
Ubiquitous access
– Remote IOPS = local IOPS
– Remote bandwidth = local bandwidth
– Remote latency = local latency + ~5 µs
Highly optimized
– 2RU server with 24 NVMe drives: >4.9M 4KB IOPS, >24 GB/s
Scalability
– 20 servers, shared data, >99% efficiency
– 128 servers @ NASA: >130 GB/s writing through a shared file system
Converged-ready
– Using RDDA, 0% target CPU usage
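Putting the slide's box-level figures into per-device terms, a quick back-of-the-envelope sketch (the totals and the ~5 µs adder are the slide's numbers; the division is ours):

    # Per-drive view of the 2RU/24-drive figures above.
    DRIVES = 24
    TOTAL_IOPS = 4_900_000
    TOTAL_BW_GBS = 24.0
    REMOTE_OVERHEAD_US = 5.0

    per_drive_iops = TOTAL_IOPS / DRIVES    # ~204K 4KB IOPS per drive
    per_drive_bw = TOTAL_BW_GBS / DRIVES    # ~1 GB/s per drive

    def remote_latency(local_us):
        """Latency model from the slide: local latency plus ~5 us."""
        return local_us + REMOTE_OVERHEAD_US

    print(f"{per_drive_iops:,.0f} IOPS and {per_drive_bw:.2f} GB/s per drive")
    print(f"100 us local read -> {remote_latency(100):.0f} us remote")

In other words, the fabric adds only a small constant to each access while the pool scales linearly with drive count.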
14. Customer Successes
Pooling NVMe enables new science use cases at NASA
Use case
• Large-scale modeling, simulation, analysis and visualization
• Visualizes supercomputer simulation data on 128 monitors from a 128-node cluster
Problem
• Interactive work is generally small I/Os
• Introducing high-performance local NVMe SSDs creates a data locality problem
Solution
• NVMesh enables NASA to create a petabyte-scale unified pool of high-performance distributed flash while retaining the speeds and latencies of directly attached media
17. Data Management | Lustre Designed to Scale Out
[Architecture diagram: Lustre clients reach storage over the data network (LNET: InfiniBand/Ethernet/Omni-Path). A Management Server (MGS) with its Management Target (MGT) and Metadata Servers (MDS) with Metadata Targets (MDTs) sit alongside Object Storage Servers (OSS) with Object Storage Targets (OSTs); storage servers are grouped into failover pairs. Each added OSS/OST building block adds XX GB/s of bandwidth, and each added MDS/MDT adds XX,000 metadata ops.]
Example: a customer goal of 60 GB/s is met with four building blocks of 17 GB/s each (68 GB/s aggregate).
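To make that sizing arithmetic explicit, a minimal sketch (assuming the slide's 17 GB/s per building block; real sizing would come from the HPC Business Unit):

    import math

    # Add OSS building blocks until the bandwidth goal is met.
    GOAL_GBS = 60.0
    PER_BLOCK_GBS = 17.0   # example rating from the slide, not a guarantee

    blocks = math.ceil(GOAL_GBS / PER_BLOCK_GBS)
    aggregate = blocks * PER_BLOCK_GBS

    print(f"{blocks} OSS building blocks -> {aggregate:.0f} GB/s aggregate")
    # 4 blocks -> 68 GB/s, comfortably above the 60 GB/s goal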
18. Data Management | Lustre Update: Shift to Community Lustre
– Intel initiated a process to consolidate its Lustre efforts around a single version of Lustre that will be available from the community as open source
– All proprietary elements of Intel Enterprise Edition for Lustre were contributed by Intel to the community
– HPE will deliver an updated Apollo 4520 Lustre solution based on Community Lustre in late 2H2017
ORIGINAL: Community Lustre, Intel Enterprise Edition Lustre, Intel Foundation Edition Lustre, Intel Cloud Edition Lustre
NEW: Community Lustre

                              ORIGINAL                                    NEW
Integrated on HPE hardware?   Yes                                         Yes
Lustre version                Intel Enterprise Edition for Lustre 3.1     Community Lustre 2.10
L1 support                    HPE                                         HPE
L2 support                    HPE                                         HPE
L3 support                    Intel                                       Intel
19. Data Management | Lustre Roadmap and Relevance
Key features
• Multi-Rail LNET for data pipeline scalability
• Progressive File Layouts for performance and more efficient/balanced file storage
• Data on MDT for direct small-file storage on the MDT (flash)
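As an illustration of how the last two features combine, the sketch below sets a composite layout with the standard lfs(1) tool from a Lustre client. This assumes a client new enough for Progressive File Layouts (Lustre 2.10) and Data on MDT (which landed after 2.10, so treat it as a roadmap illustration); the directory path is a placeholder:

    import subprocess

    TARGET_DIR = "/lustre/project/smallfile_dir"  # hypothetical directory

    # Composite default layout: the first 1 MB of each new file lands on
    # the MDT (flash-backed small-file storage), the next extent is
    # striped across 4 OSTs, and anything beyond 256 MB stripes across
    # all OSTs (-c -1).
    layout = [
        "lfs", "setstripe",
        "-E", "1M",   "-L", "mdt",   # component 1: Data on MDT
        "-E", "256M", "-c", "4",     # component 2: 4-way OST striping
        "-E", "-1",   "-c", "-1",    # component 3: stripe over all OSTs
        TARGET_DIR,
    ]
    subprocess.run(layout, check=True)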
20. Data Management | Lustre Multi-Rail LNET
[Architecture diagram: the same scale-out Lustre layout as above (MGS/MGT, MDS/MDTs, OSS/OST failover pairs, Lustre clients on an InfiniBand/Ethernet/Omni-Path LNET), now with multiple fabric adapters/connections per server, so each node can drive several network rails in parallel for additional GB/s and metadata ops.]
21. Data Management | Lustre Roadmap and Approach
[Architecture diagram: the scale-out Lustre layout (MGS/MGT, MDS/MDT, OSS/OSTs in failover pairs, LNET data network, storage monitoring), annotated with the change in client data I/O.]
Small-file I/O model: in the current model, all client data I/O goes to the OSTs; in the new model, small writes go to the MDT and large writes go to the OSTs.
22. HPE Apollo 4520 Scalable Storage with Lustre
Designed for petabyte-scale data sets
Density-optimized design for scale
• Dense storage design translates to lower $/GB
• Linear performance and capacity scaling
ZFS for file protection and performance
• ZFS file system provides advanced data protection
• ZFS RAID provides snapshots, compression and error correction
High-performance storage solution that meets demanding I/O requirements
• Up to 51 GB/s per rack using a balanced architecture based on the 4520 Lustre server with D6020 JBODs
Services and support
• Factory tested and validated; deployment services for installation
• 24/7 support services
[Image: Apollo 4520 controller]
26. Data Management | Lustre HSM Data Management Guidelines
– Data always lives longer than the hardware on which it is stored.
– Forward migration to new technology should never adversely impact the users.
27. Data Management | HPC Storage Landscape: New Model
Key takeaways
• Disaggregate and scale the high-performance storage tier independently from the capacity tier
• Co-locate the performance tier with compute and fabric
• Implement tiered data management for capacity scaling and data protection
[Diagram: compute sits alongside a high-performance storage tier, backed by capacity storage, protection & data management.]
Tiered data movement and management are a key requirement – and HPE Data Management Framework (DMF) meets that need
29. Data Management | DMF Advanced Tape Storage Integration
• DMF is certified with libraries from Spectra Logic, Oracle (StorageTek), IBM and the HPE portfolio of tape libraries
• Support for the latest LTO and enterprise-class drive technology
• Advanced feature support for accelerated retrieval and automated library management
• A certification guide for libraries and drives is available – and updated regularly
30. Data Management | DMF Object Storage Support
[Diagram: HPC and HPDA clients access a high-performance file system (NFS, CIFS, XFS, CXFS, Lustre on RAID or flash-based storage); the DMF policy and migration engine moves data through the DMF data management layer to cloud & object storage, offsite data replication, DMF Zero Watt Storage, onsite tape storage, and secure offsite tape.]
Object storage systems in a DMF architecture:
• Standards-based integration: use of the S3 interface enables compatibility with Scality, HGST Active Archive, Amazon S3, Ceph, NetApp StorageGRID, DDN WOS and open-source alternatives
• Accessibility: high resilience and data integrity for a variety of use cases
• Scalability & throughput: scalable DMF connections to the object storage environment; DMF Parallel Data Mover architecture with high availability and failover
• Flexibility: ability to blend object storage with alternative storage options, including Zero Watt Storage (performance) or tape (off-site disaster recovery)
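To illustrate the standards-based point, a minimal sketch of the S3 interface such stores expose. The endpoint, credentials, bucket and file names below are placeholders, and DMF drives this interface internally; this only shows the protocol-level compatibility that makes those back ends interchangeable:

    import boto3

    # One client API addresses any S3-compatible store, on-prem or cloud.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # on-prem S3 endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Archive a file-system object into the capacity tier.
    s3.upload_file("/scratch/results/run042.h5", "archive-tier", "run042.h5")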
31. Data Management | DMF Zero Watt Storage
High performance & density:
• 70 x 3.5" SAS drives in a 5U enclosure
• Supports >600 TB of usable storage per enclosure with 10 TB drives
• 4+ PB of usable capacity per rack
• High performance: >10 GB/s per enclosure streaming retrieval, an excellent DMF cache complement to tape, object or cloud storage
Zero Watt Storage advanced software features:
• Open-standard access – no user application changes required
• Flexible deployment – no interruption to the DMF production environment during ZWS deployment
• Tunable data movement policies – to maximize use of ZWS and other storage hardware
• Granular drive management, including automated spin-down of inactive individual disks
• Maximum power savings, increasing disk lifespan
• Automated data recoverability – silent data corruption prevention and 'in place' data recovery
32. Data Management | Lustre HSM with DMF Core Concepts
[Diagram: data migrates (e.g. by time, type, etc.) from primary storage (POSIX; online, high-performance disk) to a nearline fast-mount cache (high-capacity, low-cost, power-managed disk) and on to deep storage (object store, public cloud, tape); recall moves it back.]
• The entire namespace stays in the filesystem
• File data is migrated transparently (with invisible I/O), leaving inodes in place
• Recall on access or by schedule
• The filesystem IS the metadata database
• Transparency makes it easy – data catalog and access in the same place
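A hedged sketch of that migrate/release/recall cycle, using the standard Lustre HSM commands that a copytool such as DMF sits behind. The file path is a placeholder, and in production DMF's policy engine issues these operations automatically rather than a user at a shell:

    import subprocess

    f = "/lustre/project/old_results.tar"  # hypothetical file

    subprocess.run(["lfs", "hsm_archive", f], check=True)  # copy to back end
    subprocess.run(["lfs", "hsm_release", f], check=True)  # free OST blocks;
                                                           # the inode stays put
    subprocess.run(["lfs", "hsm_state", f], check=True)    # reports 'released'

    # Any read of the file (or an explicit restore) triggers transparent recall:
    subprocess.run(["lfs", "hsm_restore", f], check=True)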
33. Data Management | DMF Data Protection Strategy
3 copies – advantages of three copies of all data (1: performance copy, 2: secure copy, 3: disaster recovery copy):
• Optimized use of storage hardware
• High availability
• Elimination of backup
2 media types – advantages of keeping data on two different media types (RAID, flash, disk, tape, object & ZWS; tape or cloud object):
• Fast data access
• Data retention
• Archive resilience
1 copy offsite – advantages of keeping one copy offsite (primary data center plus offsite or cloud storage):
• Lower power consumption
• Base for compliance
• Disaster recovery
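This is the classic 3-2-1 rule, which is simple enough to state as a check. A minimal sketch with illustrative media names:

    # Verify a copy plan against the 3-2-1 rule described on this slide:
    # three copies, two media types, one copy offsite.
    def meets_3_2_1(copies):
        """copies: list of dicts like {'media': 'tape', 'offsite': False}."""
        return (
            len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies)
        )

    plan = [
        {"media": "zws-disk", "offsite": False},  # performance copy
        {"media": "tape",     "offsite": False},  # secure copy
        {"media": "object",   "offsite": True},   # disaster-recovery copy
    ]
    print(meets_3_2_1(plan))  # True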
34. Data Management | DMF Core Concepts
Proven in production use for over 20 years
• All-in-One – data management, archive, integrated backup, validation and repair
• Transparent – all data appears online all the time
• Policy-driven – policies leverage file attributes and define multiple copies on different media
DMF: a scalable data management fabric
• Policy-based data migration & HSM
• Parallel architecture for high throughput
• Active data validation and repair
• Minimizes storage administrator workload
Tiers: tape library storage (lowest cost per GB of data with extremely high levels of data durability), Zero Watt Storage™ (high-performance access with very low storage & operating costs), public/private cloud (highly scalable and resilient for availability and disaster recovery)
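To make "policy-driven" concrete, a small sketch of the idea: rules keyed on file attributes select migration candidates and name the copies to make. This illustrates the concept only; it is not DMF's actual policy syntax or engine, and the paths and thresholds are placeholders:

    import os
    import time

    POLICY = {
        "min_age_days": 90,           # untouched for 90+ days
        "min_size_bytes": 64 << 20,   # larger than 64 MB
        "copies": ["tape", "tape-offsite", "object"],
    }

    def select_for_migration(root, policy):
        """Yield files whose last access time and size match the policy."""
        cutoff = time.time() - policy["min_age_days"] * 86400
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                if st.st_atime < cutoff and st.st_size >= policy["min_size_bytes"]:
                    yield path

    for path in select_for_migration("/lustre/project", POLICY):
        print("migrate:", path, "->", POLICY["copies"])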
36. Key Differentiators
– High-performance data migrations: DMF Direct Archiving
– MAID storage target: DMF Zero Watt Storage
– Elegant archive storage migration over time: multi-petabyte data migrations with no user impact
– Trusted data protection: over 25 years preserving data
– Active user community: DMF User Group (Feb 2017), http://hpc.csiro.au/users/dmfug/
38. Data Management | Summary
– HPC presents unique storage challenges
– HPE has a robust and flexible set of HPC file systems
– DMF data management ensures long-term availability
– The HPC Business Unit can assist with sizing and design