A look at ESG concerns and the agility needed to address pressures to transform energy organizations through decarbonization. Presented at the Future Oil and Gas conference, November 2021
Overview of the core elements of the alliance. Presented to enterprise customers at the Microsoft NorCal MTC on November 11th, 2016
Kevin McCauley
Red Hat
Bring cloud on premises with a Kubernetes-native infrastructure
Abhinav Joshi
This document introduces the concept of a Kubernetes-native infrastructure that allows organizations to run applications on-premises with a cloud-like experience. It discusses using Kubernetes to manage both applications and infrastructure resources such as containers, VMs, and bare-metal servers. This approach simplifies operations, accelerates application development, and provides a more cost-efficient and secure way to deliver innovations on-premises than traditional virtualized data centers. Initial target use cases include developer clouds and latency-sensitive applications.
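The idea of one control plane for containers and VMs can be made concrete with a hedged sketch. Projects such as KubeVirt extend Kubernetes with a VirtualMachine resource, so a VM is declared like any other workload; the name and disk image below are illustrative placeholders, not taken from the document:

```yaml
# Illustrative KubeVirt manifest: a VM managed through the Kubernetes API,
# side by side with containers. Name and image are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Applied with `kubectl apply -f vm.yaml`, the VM is then scheduled, monitored, and life-cycled with the same tooling used for containerized applications.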
Cloud Foundry Summit 2015: Making the Leap
VMware Tanzu
Speaker: Richard Seroter, CenturyLink
To learn more about Pivotal Cloud Foundry, visit http://www.pivotal.io/platform-as-a-service/pivotal-cloud-foundry.
IBM + Red Hat: "Creating the World's Leading Hybrid Cloud Provider..."
Gustavo Cuervo
1. IBM and Red Hat have a long partnership in open source technologies spanning over 20 years. They are leaders in hybrid cloud, multi-cloud, open source, security, and data solutions.
2. Together, IBM and Red Hat provide solutions across traditional, private and public cloud environments to help clients create cloud-native applications and drive portability across clouds.
3. IBM and Red Hat share common beliefs around innovation, containers, and the importance of open standards and hybrid/multi-cloud environments. Their partnership provides enterprises flexibility and choice in their cloud journeys.
PuppetConf 2017: Zero to Cloud - James Frederick, Dell EMC
Puppet
Demands for IT efficiency and agility are impacting enterprises of all sizes and hybrid clouds are proving to be great enablers of IT Transformation. Join this session to hear how integrating Puppet automation into VMware-based cloud environments can deliver agility, standardization, and reduced operations costs. You’ll also hear how you can accelerate achieving these benefits with Dell EMC’s Enterprise Hybrid Cloud.
Platform Requirements for CI/CD Success—and the Enterprises Leading the Way
VMware Tanzu
All enterprises want to increase the speed of software delivery to get new products to market faster. The means for achieving this is often through the practice of continuous integration/continuous delivery. But speed alone isn’t enough—teams also require the ability to pivot when conditions change. They must ensure their software is stable and reliable, and be able to roll out patches and other security measures quickly and at scale.
A cloud-native platform coupled with test-driven development and CI/CD practices can help make this a reality. In this webinar, 451 Research’s Jay Lyman presents the results of his research into cloud-native platform requirements for enterprise CI/CD and DevOps success. Pivotal’s James Ma joins Lyman to discuss best practices from DevOps teams charged with running and managing cloud-native platforms, including applying CI/CD to the platform itself.
Speakers: James Ma, Pivotal and Jay Lyman, 451 Research
TechWiseTV Workshop: Improving Performance and Agility with Cisco HyperFlex
Robb Boyd
Find out how organizations like yours are deriving business value from the HyperFlex HCI solution. Join us for a deep dive and Q&A at the TechWiseTV workshop.
TechWiseTV Hyperflex 4.0 Episode: http://cs.co/9009EW2Td
The document discusses how GE's Predix platform can be used in healthcare to leverage big data analytics. It provides background on Prasanth Salla and his experience developing healthcare software. It then outlines opportunities in healthcare like using machine learning to improve outcomes and transitioning to outcomes-based payment models. The document promotes GE Predix as an industrial IoT platform and cloud that can be used to develop secure applications for collecting and analyzing device and sensor data in healthcare. It provides examples of Predix components and services and how they can enable use cases like remote patient monitoring.
1. What does Predix bring to the table?
2. How is it different from Cloud Foundry and IBM Bluemix?
3. Predix service catalog. Which services can set Predix apart?
4. Top use cases and apps
5. Likely scenarios of Predix evolution
Barriers to entry are collapsing as digital startups come out of nowhere to disrupt entire industries. In this session we will discuss the capabilities you need to deliver business innovation through software to market faster than your competitors.
Speaker: Faiz Parkar, Director EMEA GTM, Pivotal
This document discusses IBM's hybrid multicloud platform and digital transformation. Some key points:
- IBM's hybrid multicloud platform is founded on Red Hat technologies like Red Hat Enterprise Linux and Red Hat OpenShift which allow applications to be built once and deployed anywhere across public clouds, private clouds, and on-premises.
- The platform provides consistent management, security, and services across heterogeneous cloud environments from different vendors through an open, standards-based approach.
- A case study describes how Deutsche Bank used Red Hat solutions to build an application platform that streamlined development, improved efficiency, and allowed applications to be developed 2-3 weeks instead of 6-9 months.
Multicloud - Understanding Benefits, Obstacles, and Best Approaches
Kenneth Hui
Presentation given at Gartner IT IOCS 2019. Defines multicloud and explains benefits, challenges and recommended practices. Original title was "Multi-Cloud is Mostly BS."
Edge AI Framework for Healthcare Applications
Debmalya Biswas
Edge AI enables intelligent solutions to be deployed on edge devices, reducing latency, allowing offline execution, and providing strong privacy guarantees. Unfortunately, achieving efficient and accurate execution of AI algorithms on edge devices, with limited power and computational resources, raises several deployment challenges. Existing solutions are very specific to a hardware platform/vendor. In this work, we present the MATE framework that provides tools to (1) foster model-to-platform adaptations, (2) enable validation of the deployed models proving their alignment with the originals, and (3) empower engineers and architects to do it efficiently using repeated, but rapid development cycles. We finally show the practical utility of the proposal by applying it on a real-life healthcare body-pose estimation app.
Slides: Polyglot Persistence for the MongoDB, MySQL & PostgreSQL DBA
Severalnines
Polyglot Persistence for the MongoDB, PostgreSQL & MySQL DBA
The introduction of DevOps in organisations has changed the development process, and perhaps introduced some challenges. Developers have their own preferred programming languages, and likewise their own preferences for backend storage. The former is often referred to as polyglot languages and the latter as polyglot persistence.
Having multiple storage backends makes your organization more agile on the development side and gives developers choice, but it also demands additional knowledge on the operations side. Extending your infrastructure from MySQL alone to other storage backends like MongoDB and PostgreSQL means you also have to monitor, manage, and scale them. And since every storage backend excels at different use cases, you have to reinvent the wheel for each one.
This webinar covers the four major operational challenges for MySQL, MongoDB & PostgreSQL:
Deployment
Management
Monitoring
Scaling
And how to deal with them
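The operational burden of heterogeneous backends can be reduced by putting one interface in front of them. The sketch below is a minimal, hypothetical illustration, using in-memory stand-ins instead of real MySQL, MongoDB, or PostgreSQL drivers, of how a single monitoring loop can cover several backends:

```python
class Backend:
    """Minimal common interface; a real adapter would wrap a MySQL,
    MongoDB, or PostgreSQL driver behind the same method."""

    def __init__(self, name):
        self.name = name
        self.up = True  # stand-in for real connectivity state

    def health_check(self):
        # A real adapter would issue the backend's native probe,
        # e.g. "SELECT 1" for MySQL/PostgreSQL or "ping" for MongoDB.
        return self.up


def monitor(backends):
    """One loop covers every backend, whatever its native protocol."""
    return {b.name: "up" if b.health_check() else "down" for b in backends}


fleet = [Backend("mysql"), Backend("mongodb"), Backend("postgresql")]
fleet[1].up = False  # simulate a MongoDB outage
status = monitor(fleet)
```

The same adapter idea extends to deployment, management, and scaling: each backend keeps its native tooling underneath, but operators interact with one surface.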
SPEAKER
Art van Scheppingen is a Senior Support Engineer at Severalnines. He's a pragmatic MySQL and database expert with over 15 years' experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he maintained a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop, and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.
This webinar is based upon the experience Art had while writing our How to become a ClusterControl DBA blog series and implementing multiple storage backends to ClusterControl. To view all the blogs of the ‘Become a ClusterControl DBA’ series visit: http://severalnines.com/blog-categories/clustercontrol
SEE the Cloud: Hans Timmerman - The cloud descends to earth
TOPdesk
The first cloud services emerged around the turn of the century: internet services such as Hotmail that gave us browser-based access to central processing and data environments. A kind of mainframe of the internet, but in the form of a service where hardware and software no longer mattered.
Today the cloud is a household term, and many of our information services are handled by central cloud providers. With the arrival of the Internet of Things, however, the cloud operations model is also becoming attractive for decentralized and local information services: small cloud environments where data must be processed quickly, where immediate action is required, or where data simply may not be moved.
We call this fog computing, with a nod to those high, distant clouds in the sky. The cloud as a fog closing in around us, sometimes even a haze of all kinds of small information services. This presentation takes a deeper look at the developments in this area and their relevance to society and the (current) IT organization.
Applications need data, but the legacy approach of n-tiered application architecture doesn't address today's challenges. Developers aren't empowered to build and iterate their code quickly without lengthy review processes from other teams. New data sources cannot be quickly adopted into application development cycles, and developers are not able to control their own requirements when it comes to data platforms.
Part of the challenge here is the existing relationship between two groups: developers and DBAs. Developers are trying to go faster, automating build/test/release cycles with CI/CD, and thrive on the autonomy provided by microservices architectures. DBAs are stewards of data protection, governance, and security. Both of these groups are critically important to running data platforms, but many organizations deal with high friction between these teams. As a result, applications get to market more slowly, and it takes longer for customers to see value.
What if we changed the orientation between developers and DBAs? What if developers consumed data products from data teams? In this session, Pivotal’s Dormain Drewitz and Solstice’s Mike Koleno will speak about:
- Product mindset and how balanced teams can reduce internal friction
- Creating data as a product to align with cloud-native application architectures, like microservices and serverless
- Getting started bringing lean principles into your data organization
- Balancing data usability with data protection, governance, and security
Presenters: Dormain Drewitz, Pivotal, and Mike Koleno, Solstice
Bhadale group of companies Intel partner services catalogue
Vijayananda Mohire
The document provides an overview of services offered by Bhadale Group of Companies related to Intel partner solutions. Bhadale Group consists of two divisions - Bhadale IT Developers Pvt. Ltd which provides IT consultation, and Bhadale Engineering Developers Pvt. Ltd which provides multi-engineering services. The catalog lists 15 core service areas including architecture solutions, cloud solutions, data center modernization, AI/ML solutions, IoT solutions, and custom solutions. Contact details are provided for more information.
The document introduces an evolutionary event-driven architecture called the Enterprise Digital Transformation Platform (EDTP) for accelerating digital transformation. The EDTP is a 4-tier platform based on cloud, containers, microservices, events and streaming. It addresses challenges of data integration and decoupling through architectural concepts like event-driven design, microservices and templates. The EDTP provides full-stack deployment automation and microservice templating to accelerate development. Use cases from Toyota Financial Services are presented to demonstrate the EDTP's capabilities.
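The event-driven decoupling the EDTP relies on can be sketched in a few lines. This is a generic in-process illustration of the pattern, not the platform's actual API; the event name and payload are invented for the example:

```python
from collections import defaultdict


class EventBus:
    """Producers publish named events; consumers subscribe by name.
    Neither side references the other, which is the decoupling that
    event-driven architectures provide."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every registered handler, in order.
        for handler in self._subscribers[event_type]:
            handler(payload)


bus = EventBus()
audit_log, notifications = [], []
bus.subscribe("loan.approved", audit_log.append)       # e.g. an audit service
bus.subscribe("loan.approved", notifications.append)   # e.g. a notification service
bus.publish("loan.approved", {"id": 42, "amount": 10_000})
```

In a production system the bus would be a durable broker (e.g. Kafka) rather than an in-process dictionary, but the producer/consumer contract stays the same.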
This is the abstract of my M.Tech thesis, submitted in my final semester for the award of my Master's degree. It highlights my research on multi-agent system implementation in real projects and case studies, along with a general project package that serves as a reference framework and canonical model.
Webinar presented live on May 29, 2018
The Cloud Native Computing Foundation builds sustainable ecosystems and fosters a community around a constellation of projects that orchestrate containers as part of a microservices architecture. CNCF serves as the vendor-neutral home for many of the fastest-growing projects on GitHub, including Kubernetes, Prometheus and Envoy, fostering collaboration between the industry’s top developers, end users, and vendors.
In this webinar, Dan Kohn, CNCF Executive Director, will present:
- A brief overview of CNCF
- Evolving monolithic applications to microservices on Kubernetes
- Why Continuous Integration is the most important part of the cloud native architecture
Watch the video: http://www.cloud-council.org/webinars/kubernetes-and-container-technologies-from-cncf.htm
Bhadale group of companies technology ecosystem for CPSoS
Vijayananda Mohire
This is our ecosystem and tool chain used for systems engineering. We offer design, development, and testing services for a range of systems, from home appliances to flight instruments and avionics systems.
Bhadale group of companies on-premise services catalogue
Vijayananda Mohire
This is our offering for on-premise cloud services. We have a wide range of services that help clients adopt cloud technologies on their own premises.
This document discusses disruptive technologies including blockchain, big data platforms, and data-driven intelligence and analytics. It provides an overview of blockchain technology including how it works through distributed ledgers and mining. It also discusses challenges of big data including volume, velocity, and variety, and different platform options to address these. Finally, it covers machine learning, artificial intelligence, and deep learning applications as well as challenges in applying these techniques.
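The distributed-ledger mechanics mentioned above, hash-linked blocks and mining, fit in a short sketch. This toy version uses a trivially low difficulty and illustrates only the concept, not a production blockchain:

```python
import hashlib
import json


def block_hash(block):
    # Deterministic hash of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def mine(prev_hash, data, difficulty=2):
    # "Mining": search for a nonce whose hash starts with `difficulty` zeros.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1


def valid(chain):
    # Every block must commit to its predecessor's hash, so altering
    # any historical block invalidates everything after it.
    return all(
        blk["prev"] == prev_h and block_hash(blk) == h
        for (_, prev_h), (blk, h) in zip(chain, chain[1:])
    )


chain = [mine("0" * 64, "genesis")]
chain.append(mine(chain[-1][1], {"tx": "a->b: 5"}))
chain.append(mine(chain[-1][1], {"tx": "b->c: 2"}))
```

Tampering with any block's data changes its hash and breaks the link to its successor, which is what makes the ledger tamper-evident.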
1. Companies are realizing that moving to the cloud is inevitable and need to determine their cloud strategy by assessing which applications are cloud-feasible and the best deployment model.
2. Determining cloud feasibility involves analyzing factors like the application's criticality, dependencies, and technical compatibility with cloud platforms. Feasible applications then need deeper analysis to identify the optimal migration path.
3. Developing an effective cloud strategy requires structuring large migrations, defining target architectures and operating models, and taking a thoughtful approach to planning the identification, movement, and management of applications in the cloud.
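As a hypothetical illustration of the feasibility assessment in step 2, applications can be scored on a few weighted factors. The factors and weights below are invented for the example, not a prescribed model:

```python
# Toy feasibility score: higher means more cloud-feasible.
# High criticality and many dependencies count against migration;
# technical compatibility counts in favor. Weights are illustrative.
WEIGHTS = {"criticality": -0.4, "dependencies": -0.3, "compatibility": 0.3}


def feasibility(app):
    """Each factor is rated 0-10; returns a signed score."""
    return sum(WEIGHTS[f] * app[f] for f in WEIGHTS)


portfolio = {
    "billing":   {"criticality": 9, "dependencies": 8, "compatibility": 3},
    "marketing": {"criticality": 2, "dependencies": 1, "compatibility": 9},
}
# Rank applications from most to least cloud-feasible.
ranked = sorted(portfolio, key=lambda a: feasibility(portfolio[a]), reverse=True)
```

A real assessment would add many more factors (data residency, licensing, latency) and pair each score with a recommended migration path.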
Leveraging IoT as part of your digital transformation
John Archer
A review of approaches to edge computing architecture, with emphasis on improved security for container workloads collecting telemetry from industrial IoT environments.
Corporate presentation: Andreas Tsagkaris, Chief Technology Officer, Performance Technologies
Title: "OpenShift and IBM Cloud Paks on Power for Digital transformation"
In this deck from the 2019 UK HPC Conference, Glyn Bowden from HPE presents: The Eco-System of AI and How to Use It.
"This presentation walks through HPE's current view on AI applications, where it is driving outcomes and innovation, and where the challenges lay. We look at the eco-system that sits around an AI project and look at ways this can impact the success of the endeavor."
Watch the video: https://wp.me/p3RLHQ-kVS
Learn more: https://www.hpe.com/us/en/solutions/artificial-intelligence.html
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Developers are constantly seeking an easier and faster way to build and ship new and modern software features and capabilities based on the latest and greatest cloud APIs. DevOps teams and IT professionals, on the other hand, face the challenge of controlling security, compliance, performance, scalability, and availability of the underlying infrastructure environments. Can both of these initiatives be achieved?
These slides, based on the webinar featuring Torsten Volk, research director at the leading IT analyst firm EMA, highlight how to bridge the gap between these two key initiatives and transform corporate IT into an accelerator for digital transformation.
The document discusses digital transformation with Red Hat hybrid cloud. It begins by outlining some common business pain points and challenges around technical debt, digitalization, time to market, and return on investment. It then covers key technology trends like cloud-native applications, AI/ML, IoT, blockchain, and more. The rest of the document focuses on how Red Hat's portfolio, including OpenShift and middleware solutions, can help customers address these trends and challenges as part of their digital transformation journey by enabling new application development approaches, modernizing infrastructure, and optimizing processes.
Innovating to Create a Brighter Future for AI, HPC, and Big Datainside-BigData.com
In this deck from the DDN User Group at ISC 2019, Alex Bouzari from DDN presents: Innovating to Create a Brighter Future for AI, HPC, and Big Data.
"In this rapidly changing landscape of HPC, DDN brings fresh innovation with the stability and support experience you need. Stay in front of your challenges with the most reliable long term partner in data at scale."
Watch the video: https://wp.me/p3RLHQ-kxm
Learn more: http://ddn.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Secure, Strengthen, Automate, and Scale Modern Workloads with Red Hat & NGINX
NGINX, Inc.
Learn how to support your application delivery – no matter where you are on the journey from monolithic apps to microservices.
Join this webinar to learn:
- About important considerations around digital innovation in FSI
- How to leverage automation and Ansible to deliver apps faster
- About keys to delivering modern apps securely and reliably anywhere
- How OpenShift takes the complexity out of containers
https://www.nginx.com/resources/webinars/secure-strengthen-automate-scale-modern-workloads-with-red-hat-nginx/
The document discusses embedding machine learning in business processes using the example of baking cakes. It notes that while bakers follow exact recipes and processes, the results are not always perfect due to various factors. It then discusses how manufacturers are "data rich but information poor" as they cannot derive meaningful insights from their operational data. The document advocates generating "actionable intelligence" through deep analysis of production data to determine the root causes of issues like cracked cakes, rather than just reporting what problems occurred. This would help manufacturers diagnose and address process flaws more precisely.
FlexPod Select for Hadoop is a pre-validated solution from Cisco and NetApp that provides an enterprise-class architecture for deploying Apache Hadoop workloads at scale. The solution includes Cisco UCS servers and fabric interconnects for compute, NetApp storage arrays, and Cloudera's Distribution of Apache Hadoop for the software stack. It offers benefits like high performance, reliability, scalability, simplified management, and reduced risk for organizations running business-critical Hadoop workloads.
Cloud computing refers to a model of network computing where a program or application runs on a connected server or servers rather than on a local computing device such as a PC, tablet, or smartphone.
www.ipsrglobal.com
DevConf.US 2022 - Exploring Open Source Edge Success at ScaleEric D. Schabell
You've heard of large scale open source architectures, but have you ever wanted to take a serious look at real life enterprise edge implementations that scale? This session takes attendees on a tour of multiple use cases for enterprise challenges on the edge with integration, telco, healthcare, manufacturing, and much more. Not only are these architectures interesting, but they are successful real life implementations featuring open source technologies and power many of your own edge experiences.
The attendee departs this session with a working knowledge of how to map general open source technologies to their own edge solutions. Material covered is available freely online and attendees can use these solutions as starting points for aligning to their own solution architectures. Join us for an hour of power as we talk architecture shop!
Data Driven Advanced Analytics using Denodo Platform on AWSDenodo
The document discusses challenges with data-driven cloud modernization and how the Denodo platform can help address them. It outlines Denodo's capabilities like universal connectivity, data services APIs, security and governance features. Example use cases are presented around real-time analytics, centralized access control and transitioning to the cloud. Key benefits of the Denodo data virtualization approach are that it provides a logical view of data across sources and enables self-service analytics while reducing costs and IT dependencies.
How to reinvent your organization in an iterative and pragmatic way? This is the result of using our digital toolbox. It allows you to transform your business model, expand your ecosystem by setting up your digital platform. This reinvention is also supported by the adaptation of your governance allowing you to innovate while guaranteeing the performance of your organization. For any information / suggestion / collaboration - [email protected]
Watch full webinar here: https://bit.ly/3mdj9i7
You will often hear that "data is the new gold." In this context, data management is one of the areas that has received the most attention from the software community in recent years. From artificial intelligence and machine learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar, we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Watch this on-demand webinar as we cover:
- The most interesting trends in data management
- How to build a data fabric architecture
- How to manage your data integration strategy in the new hybrid world
- Our predictions on how those trends will change the data management world
- How companies can monetize data through data-as-a-service infrastructure
- The role of voice computing in future data analytics
apidays LIVE Australia 2021 - A cloud-native approach for open banking in act...apidays
apidays LIVE Australia 2021 - Accelerating Digital
September 15 & 16, 2021
A cloud-native approach for open banking in action
Rafael Marins, Principal Product Marketing Manager at Red Hat
IDC interviewed nine organizations that are using Red Hat OpenShift as their primary application development platform. These organizations reported that OpenShift helps them deliver timely and compelling applications and features across their complex and heterogeneous IT environments and supports key IT initiatives such as containerization, microservices, and cloud migration strategies.
This document provides an overview of open source data warehousing and business intelligence (DW/BI). It defines cloud computing and explains how open DW consists of pre-designed data warehouse architectures that are free to use. Open DW reduces costs and risks by shortening design and development time. While the architectures are free, vendors charge for services like customization, support, and maintenance. The document discusses the need for and benefits of open DW/BI, including faster deployment, lower costs, and mitigated risks through rapid development. It also outlines some popular open source databases, tools, and vendors in this space.
Enabling Enterprise-wide OT Data access with Matrikon Data Broker.pdfJohn Archer
Highlights of the new partnership between Red Hat and Matrikon supporting OPC-UA on Red Hat edge infrastructure, including bare metal, VM, and container deployments of Matrikon Data Broker.
Delivering Agile Data Science on Openshift - Red Hat Summit 2019John Archer
Audrey Reznik, Data Scientist at ExxonMobil, and John Archer, Red Hat Solution Architect, present on how to use OpenShift to enable data science teams, create value, and improve agility and collaboration in larger organizations.
Single View of Well, Production and AssetsJohn Archer
Deliver a complete view of G&G, Well Header, Volumes, transactional data
Reduce Data Movement
Reduce Load on Data sources with intelligent caching
Aggregated single view of complex and legacy data sources
This document provides an overview and agenda for a developer 2 developer webcast series on microservice architecture and container technologies. It includes details on upcoming webcasts in March and April 2017 focused on microservice architecture, Azure Container Service, Pivotal Cloud Foundry, and Red Hat OpenShift. The document also advertises a webcast on Red Hat OpenShift presented by John Archer on containerization with OpenShift and how it enables modern application development.
Field development and operational optimization for unconventionalsJohn Archer
How to address high operational demands for drilling and fracking wells.
How improved planning can address inefficiencies in unconventional fields.
Agile end-to-end service value chains are needed from Edge Computing scenarios to newly formed Data Science teams.
Making the Data Science teams efforts operational is an industry challenge.
Real-time operations of all assets are needed for improved margins in the industry.
Improve margins in shale oil fields with sand frac logistics improvements.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptxAnoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Semantic Cultivators : The Critical Future Role to Enable AIartmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
Linux Support for SMARC: How Toradex Empowers Embedded DevelopersToradex
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP’s i.MX 8M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and a quick time-to-market.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025BookNet Canada
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdfAbi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker...TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Procurement Insights Cost To Value Guide.pptxJon Hansen
Procurement Insights, with its integrated historic procurement industry archives, serves as a powerful complement — not a competitor — to other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value-driven proprietary service offering here.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is celebrated as the next generation of the HCL Notes client and offers numerous advantages, such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, significantly reducing administrative effort compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including:
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder in the browser's cache (using OPFS)
- Understanding the differences between single-user and multi-user scenarios
- Using the client clocking feature
Andrew Marnell: Transforming Business Strategy Through Data-Driven InsightsAndrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
What is Model Context Protocol(MCP) - The new technology for communication bw...Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien...Noah Loul
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
Extending open source and hybrid cloud to drive OT transformation - Future Oil & Gas conference
1. Extending open source and hybrid cloud to drive OT transformation
John Archer, Senior Principal BDM - AI/Edge
[email protected]
Future Oil & Gas, Nov 16-17, 2021
7. Action Plan? How to prioritize?
- Condition models, preventative maintenance, consumption trends, feedstocks
- Border taxes; pass the carbon tax down to customers
- Change business lines with renewables, DER, EV, storage, biofuels, hydrogen, ammonia
- What is my board thinking? End-to-end process impacts
Today this is all in silos, difficult to analyze, and culturally and/or security sensitive in many organizations.
8. Red Hat OpenShift: Innovation without limitation
Big ideas drive business innovation: cloud-native, Internet of Things, digital transformation, containers, DevOps, open organization, open source communities, Kubernetes, hybrid cloud, machine learning, AI, innovation, security, automation, 5G.
Every organization in every geography and in every industry can innovate and create more customer value and differentiation with open source technologies and an open culture.
9. Red Hat OpenShift: Delivering innovation without limitation
But innovating isn't always easy:
- Innovation: innovate at speed.
- Flexibility: flexibility to adapt to market changes.
- Growth: grow new customer experiences and lines of business.
10. Red Hat industrial edge
Open source initiatives around industrial edge computing collect and focus the best ideas:
- A global alliance solving critical manufacturing challenges
- LF Energy, an open source foundation focused on the power systems sector
- A standards-based, open, secure, and interoperable process control architecture
- An open source data platform for the energy industry
- The interoperability standard for secure and reliable information exchange in industrial automation
- A platform that powers the world's leading commercial IoT solutions
- A framework of open source software components to build platforms that support the development of smart solutions faster, easier, and cheaper
- The cross-vendor open source connectivity solution for smart factories and smart products
- A shared vision of continuous data exchange for all contributors along the automotive value chain
11. Data in a large enterprise: energy organization data challenges
- Data silos: slow data access puts projects at risk.
- Legacy tech & poor automation: error-prone, manual processes are unacceptable in a modern event-driven environment.
- Lack of cross-team collaboration: demands a person-in-the-loop with institutional knowledge; analysts assemble personal, stale datasets.
- No data governance process: without a centralized understanding of company assets, few models are capable of deployment.
[Diagram: months of effort separate "business has a question" from "business gets an answer." Data from sources (seismic, wellbore, fluids, core analysis, pipeline) flows through subject matter experts, data scientists, geologists, and geophysicists before reaching models; roughly 80% of the effort is spent making data available.]
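The governance and lineage concerns above (the speaker notes also stress labeling data and knowing its lineage and timeliness) can be made concrete with a small sketch. This is a hypothetical illustration, not Red Hat tooling; the `DataAsset` record and its fields are invented for the example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical minimal catalog entry: each dataset carries a label,
# its lineage (which upstream assets it was derived from), and a
# refresh timestamp so timeliness can be checked before analysis.
@dataclass
class DataAsset:
    name: str
    source_system: str                      # e.g. "SEISMIC", "WELLBORE"
    derived_from: list = field(default_factory=list)
    refreshed_at: datetime = field(default_factory=datetime.utcnow)

    def is_stale(self, max_age_days: int = 30) -> bool:
        """Flag assets older than the allowed freshness window."""
        return datetime.utcnow() - self.refreshed_at > timedelta(days=max_age_days)

wellbore = DataAsset("wellbore_headers", "WELLBORE")
merged = DataAsset("well_summary", "ANALYTICS", derived_from=[wellbore.name])

print(merged.derived_from)   # lineage: which assets fed this one
print(merged.is_stale())     # freshly built, so not stale
```

Even this tiny amount of centralized metadata answers the questions the slide raises: which data a model used, and whether it was current.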
12. Data in a large enterprise: consolidating data isn't sufficient
- Silos still exist: data warehouses & lakes leave data in its original form.
- Little change in time to result: you still need SMEs and manual processes, and there is still no governance.
- Consumers handle data preparation: data consumers remain responsible for transformations.
- Lacking business-based data model: data should be transformed into the form the business needs and understands; requiring automation forces global understanding as teams self-service based on their needs.
[Diagram: sources → warehouse/lake → data solutions → consumers, delivered through APIs, intelligent applications, and events.]
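One failure mode of consumer-side data preparation (called out in the speaker notes: metric vs. imperial volume units) disappears when data is transformed once, centrally, into the model the business understands. A minimal sketch; the barrel-standard assumption and helper names are illustrative only:

```python
# Minimal sketch: normalize volumes into the single unit the business
# understands (barrels, as an assumed standard), so every consumer
# receives the same business-based data model instead of converting
# raw source data themselves.
CUBIC_METERS_PER_BARREL = 0.158987  # standard oilfield conversion factor

def to_barrels(value: float, unit: str) -> float:
    """Convert a raw source volume into barrels."""
    if unit == "bbl":
        return value
    if unit == "m3":
        return value / CUBIC_METERS_PER_BARREL
    raise ValueError(f"unknown unit: {unit}")

# Raw records arrive from different silos in different units.
raw = [("well_a", 100.0, "bbl"), ("well_b", 20.0, "m3")]
normalized = [(name, round(to_barrels(v, u), 1)) for name, v, u in raw]
print(normalized)
```

With the conversion owned by the pipeline, a consumer can no longer mix unit systems and base downstream calculations on inconsistent volumes.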
13. Red Hat industrial edge: the need
"As a shop-floor IT person, I want to get rid of all the different bespoke and customized hardware solutions for PLCs, SCADA, HMI, MES, etc. They are expensive, inflexible, and hard to maintain. I want a single unified software platform based on standard hardware, so I can easily add new features and functions defined purely in software, even from different vendors. That would help me to improve the efficiency and agility of my plant."
14. Red Hat industrial edge: our focus use cases
Four edge categories around the industrial edge:
- Digital Enterprise Edge (extend cloud/data center approaches to new contexts, distributed locations, and OT): standardized distributed operations; modernized application environments (OT and IT); modernized network infrastructure.
- Operations Edge (leverage edge/AI/serverless to transform OT environments): automation/integration of monitoring & control processes; predictive analytics; production optimization; supply chain optimization.
- Provider Edge (network and compute specifically to support remote/mobile use cases): aggregation, access, and far edge; managing a network for others, e.g. telecommunications service providers creating reliable, low-latency networks.
- Connected Product Edge (create new offerings or customer/partner engagement models): vehicle edge (onboard & offboard); in-vehicle OS; autonomous driving and infotainment up to ASIL-B; quality management.
15. Red Hat Edge Topologies
Cluster management and application deployment happen in the central data center; Kubernetes node control extends through regional data centers to the edge.
- Single node edge servers: low bandwidth or disconnected sites.
- Remote worker nodes: environments that are space constrained.
- 3 node clusters: small footprint with high availability.
(Legend: C = control nodes, W = worker nodes.)
16. Our edge platforms: consistent operations at scale
- Small footprint device edge: an image-based deployment option of RHEL that includes transactional OS updates and intelligent OS rollbacks, intended for, but not limited to, containerized applications.
- Single node edge servers: a Red Hat OpenShift deployment on a single node (supervisor + worker) with resources to run a full Kubernetes cluster as well as application workloads.
- Remote worker nodes: Red Hat OpenShift supervisors reside in a central location, with reliably-connected workers distributed at edge sites sharing a control plane.
- Edge clusters (3+ node HA): Red Hat OpenShift supervisors and workers reside on the same nodes, a high availability (HA) setup with just 3 servers.
17. Overview of Red Hat OpenShift Data Science
Key features, addressing AI/ML experimentation and integration use cases on a managed platform:
- Increased capabilities/collaboration: combines Red Hat components, open source software, and ISV certified software available on Red Hat Marketplace.
- Rapid experimentation use cases: model outputs are hosted on the Red Hat OpenShift managed service or exported for integration into an intelligent application.
- Cloud service: available on Red Hat OpenShift Dedicated (AWS) and Red Hat OpenShift Service on AWS.
- Core data science workflow: provides data scientists and intelligent application developers the ability to build, train, and deploy ML models.
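The build, train, and deploy loop above can be sketched in miniature. This is a plain-Python stand-in for illustration, not the OpenShift Data Science tooling itself; the least-squares model and the `model.json` export file are invented for the example:

```python
# Illustrative sketch of the core build / train / deploy workflow.
# "Deployment" here is simply serializing the fitted model so an
# intelligent application could load and serve it.
import json
import statistics

def train(xs, ys):
    """Fit y = a*x + b by simple least squares."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return {"a": a, "b": my - a * mx}

model = train([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])

with open("model.json", "w") as f:   # export for integration
    json.dump(model, f)

def predict(model, x):               # the "deployed" serving step
    return model["a"] * x + model["b"]

print(predict(model, 5))
```

The value of a managed platform is that each of these steps (environment, training compute, hosting the output) is provisioned for the data scientist rather than hand-built.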
18. And the services and partners to guide you to success
- Red Hat Open Innovation Labs: catalyze innovation and immerse your team. Experiment: rapidly build prototypes, do DevOps, and be agile; bring modern application development back to your team; work side by side with experts in a residency-style engagement.
- Red Hat Container Adoption Program: a framework for successful container adoption and IT transformation, with mentoring, training, and side-by-side collaboration.
- System integrators: or work with our ecosystem of certified systems integrators, including…
19. AI/ML key execution challenges
- Readily usable data lacking: lots of data is collected, but finding and preparing the right data across a multitude of sources with varying quality is difficult.
- Talent shortage: a lack of key skills makes it difficult to find and secure talent to maintain operations.
- Unavailability of infrastructure & software: no rapid availability of infrastructure and software tools slows data scientists and developers.
- Lack of collaboration across teams: unable to implement quickly due to slow, manual, and siloed operations.
- Slow CPU processing: data sets continue to increase in size, but CPUs are not getting faster and are not able to parallelize processes well.
20. Overview of Red Hat OpenShift Data Science: our approach to AI/ML
Data as the foundation:
- Hybrid cloud: represents a workload requirement for our platforms across the hybrid cloud.
- Open source efficiency: applicable to Red Hat's existing core business in order to increase open source development and production efficiency.
- Intelligent platforms: valuable as specific services and product capabilities, providing an intelligent platform experience.
- Intelligent apps: lets customers build intelligent apps using Red Hat products and our broader partner ecosystem.
21. Overview of Red Hat OpenShift Data Science: depth and scale without lock-in
Capabilities delivered through the combination of Red Hat and the partner ecosystem:
- Red Hat portfolio and services: complement common data science tools in Red Hat OpenShift Data Science with other Red Hat products and cloud services.
- Partner ecosystem: access specialized capabilities by adding certified ISV ecosystem products and services from Red Hat Marketplace.
- Managed cloud platform: deployed on Red Hat OpenShift and managed on Amazon Web Services, providing access to compute and accelerators based on your workload.
22. Edge is bringing transformation to operational technology
- OT, software-defined everything: real-world, real-time interaction; convergence of planning & execution; implementation of data-driven insights; integration of formerly closed systems.
- IT, software-defined platforms: standard, scalable hardware; cloud-native applications; flexibility and agility; convergence of data platforms.
23. Red Hat Marketplace: 100+ Red Hat OpenShift certified operators
Categories include application runtimes, customer code, AI/ML, databases & big data, networking, security, monitoring & logging, DevOps tools, and storage.
24. Edge computing simplified deployment
Validated Patterns simplify the creation of edge stacks, bringing the Red Hat portfolio and ecosystem together, from services to the infrastructure.
- Config as code: go beyond documentation, using a GitOps process to simplify deployment.
- Highly reproducible: so that you can scale out your deployments with consistency.
- From POC to production: ensure your teams are ready to operate at scale.
- Open for collaboration: anyone can suggest improvements and contribute.
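The "config as code" GitOps idea can be sketched as a reconcile loop: the desired state declared in a repository is compared with the observed state of a site, and only the differences are acted on. A hypothetical minimal illustration, not the Validated Patterns implementation; all application names and versions are invented:

```python
# Minimal GitOps-style reconcile sketch: compare declared (repo) state
# against observed (running) state and emit only the converging actions.
desired = {"mqtt-broker": "2.0", "vision-model": "1.3", "dashboard": "1.0"}
observed = {"mqtt-broker": "1.9", "dashboard": "1.0"}

def reconcile(desired, observed):
    """Return the actions needed to converge observed onto desired."""
    actions = []
    for app, version in desired.items():
        if app not in observed:
            actions.append(("install", app, version))
        elif observed[app] != version:
            actions.append(("upgrade", app, version))
    for app in observed:
        if app not in desired:            # drift: remove what the repo
            actions.append(("remove", app, observed[app]))  # no longer declares
    return actions

for action in reconcile(desired, observed):
    print(action)
```

Because the loop is driven entirely by declared state, the same repository can be applied to many edge sites, which is what makes the pattern reproducible at scale.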
#6: If Scope 3 emissions have gained such attention, it is partly due to its massive downstream impact in the Automotive and the Oil & Gas industries. In the Automotive sector, Scope 3 emissions have a considerable influence, as it accounts for 95% of the total induced emissions. In the Oil & Gas industry, Scope 3 alone represents 85% of emissions [1] of the industry. Shell, for instance, has 90% of its emissions stemming from its supply chain and the use of its products.
#7: Is your Data Trustworthy?
Need to understand the ‘lineage’ of the data.
You need to ‘label’ your data so that you know which data you have used
For warehouse and data lakes you need to keep the timeliness of data in mind
Data Gravity - Some countries will not let data out of the country (need Hybrid solution)
#9: Intro - Business innovation is driven by big ideas:
Wow - what an incredible time we live in. What an exciting time to be alive!
Business is moving faster than ever before. Today, we can do things we could only dream of a few years ago.
Technology, open source communities and new ways of collaboration are driving business innovation
No longer are we looking at startups and Web 2.0 companies like Facebook, Uber and Airbnb for inspiration as to what innovation looks like.
Today, every organization in every geography and any industry can innovate, create more customer value and differentiation and compete on an equal playing field.
And with Red Hat OpenShift, we’re building on our heritage of Red Hat Enterprise Linux to provide you with a platform that enables your organization to innovate faster.
But why is being able to move faster and innovate so important?
#10: So the question comes back to you… Do you need to deliver solutions faster? Will delivering solutions faster help your organization innovate and exceed its goals?
<Let customer talk>
#11: See also here:
https://docs.google.com/presentation/d/1kCQJs0GaYFvmQv1RPov8yUEeNmyfE0tDUc2Aj2hZ8AU/edit#slide=id.gd93df9f22e_0_0
#12: Data ecosystems are becoming more complex, especially as cloud-based data platforms are added to the mix
This means that the process by which the business gets answers to its questions is also becoming more complex
In a modern data ecosystem, there is massive amounts of data sitting in a variety of locations and formats, like database, data lakes and warehouses, both on prem and in the cloud.
Worse yet, this data can be siloed. Adding to the silos, there could be a lack of cross-team collaboration.
For example, NOC assets may be looked after by different groups, and are therefore stored in different locations. You may need to obtain permission to access the data you are interested in and services of that team’s SME for help in obtaining the data you want.
When data is pulled out of a silo, legacy tech and poor automation may produce error-prone data.
In order to effectively analyze the data, it needs to be put into a common format automatically and be subject to data governance:
Data governance (DG) is the process of managing the availability, usability, integrity and security of the data in enterprise systems, based on internal data standards and policies that also control data usage.
Effective data governance ensures that data is consistent and trustworthy and doesn't get misused.
It's increasingly critical as organizations face new data privacy regulations and rely more and more on data analytics to help optimize operations and drive business decision-making.
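To make the governance idea above concrete, here is a minimal sketch of an automated governance-style check. The field names, allowed units, and rules are hypothetical illustrations, not anyone's actual governance policy; real governance tooling covers availability, security, and lineage as well.

```python
# Hypothetical governance check: required fields and allowed units.
REQUIRED_FIELDS = {"well_id", "measured_at", "volume", "unit"}
ALLOWED_UNITS = {"bbl", "m3"}

def validate_record(record: dict) -> list[str]:
    """Return a list of governance violations for one record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("unit") not in ALLOWED_UNITS:
        problems.append(f"unit {record.get('unit')!r} not allowed")
    return problems

# A record missing a timestamp and using an unapproved unit
# fails both checks:
record = {"well_id": "W-17", "volume": 1200.0, "unit": "gallons"}
print(validate_record(record))
```

Automating checks like these is what keeps data consistent and trustworthy as it leaves its silo.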
This requires you to consolidate it into a single location by moving and/or copying it there. (We address why consolidating the data into a single location may not be a good idea on the next slide.)
All of these steps are extremely time consuming, can get quite expensive, and add zero value, while also increasing security risks by leaving multiple copies of your data lying around.
Ultimately these steps drive down your ability to provide insights at the speed of business.
It is no surprise that 80% of an enterprise’s time is spent making the data available for analysis, while only 20% is spent finding the answers to its questions.
Let’s address why consolidating the data into a single location is not a good idea
#13: Consolidation is extremely time consuming and isn’t sufficient
Silos still exist
Data Warehouses & Lakes leave data in original form.
Little change in time to result
Still need SMEs, manual process, no governance, etc.
Consumers handle data preparation
Data consumers still responsible for transformations
This can be dangerous: a consumer may assume the metric system when the organization bases its volume calculations on the imperial system, leading to costly errors when that model or analytics is applied to other data sets within the organization.
Lacking business-based data model
At the end of the day, the Data should be transformed into the form the business needs and understands.
Requiring automation forces a shared, global understanding, as teams now self-serve based on their needs
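The metric-versus-imperial risk above can be sketched in a few lines: make the unit explicit in the data and route every consumer through one conversion function, so nobody silently guesses. The function name and the chosen canonical unit are illustrative assumptions, not a prescribed design.

```python
# Sketch: normalize volumes to one canonical unit before shared analytics.
M3_PER_BBL = 0.158987  # one US oil barrel in cubic metres

def to_cubic_metres(value: float, unit: str) -> float:
    """Convert a volume to cubic metres; reject unknown units loudly."""
    if unit == "m3":
        return value
    if unit == "bbl":
        return value * M3_PER_BBL
    raise ValueError(f"unknown unit: {unit!r}")

# Consumers always convert through one function instead of assuming:
print(to_cubic_metres(100.0, "bbl"))  # roughly 15.9 m3
```

Failing fast on an unknown unit is the point: a loud error is far cheaper than a model quietly built on mixed units.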
All of these items seem to suggest that getting good data in a timely manner is impossible. It is not, because now is the right time to change the way we store, access, gather, and prepare data, thanks to a number of factors:
Cloud computing (public and on premises)
Use of open source technologies such as containers
Edge computing
Increased programming capabilities
Standards adoption
Most important of all, O&G C-suite acceptance
With that let’s look at how we can consume data using some of these factors
#15: Enterprise Edge: horizontal capabilities, usable from IT and OT, not specific to any single industry
Operations Edge: vertical, OT specific, Industrial specific
Provider Edge: Telco Specific, Private 5G solutions
Vehicle Edge: rather Automotive specific; more details here: https://docs.google.com/presentation/d/1Fc4-bWCxsSxAG8DWs18T57sm-KA0qrlRDCrYHslp6TE/edit#slide=id.gee4169c65d_0_110
#16: Single Node now joins our previously announced 3 Node clusters and Remote worker nodes.
3 Node clusters for sites that require high availability, but in a smaller, 3 node footprint
Remote worker nodes where only worker nodes are in smaller edge locations while the controller nodes are in larger sites like regional data centers
Single node, which will provide both software high availability* and a smaller footprint - this is currently scheduled to be available in the second half of 2021
* If a container fails, Kubernetes is able to restart it. Obviously, hardware failures are not covered when you are running on a single server.
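As a purely illustrative config fragment (the names and image below are hypothetical), this is the kind of Deployment manifest that gives single-node "software high availability": Kubernetes keeps one replica running and restarts the container if it crashes, though a hardware failure of the node itself is still fatal.

```yaml
# Hypothetical sketch, not an actual product manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app                 # hypothetical name
spec:
  replicas: 1                    # single node, single replica
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      containers:
      - name: edge-app
        image: registry.example.com/edge-app:latest   # hypothetical image
      # restartPolicy defaults to Always, so a crashed
      # container is relaunched on the same node
```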
#19: And Red Hat and its system integrator partners are there to help you on every step of the journey: from the culture change and skills development needed to move to cloud-native development, to the modernization of existing applications to containers, to optimizing processes for developers and IT.
#20: Challenges
Data science is transformational, but its potential is being limited by five key things. If we solve these things, we can get better insights that translate into business value.
Organizations now have access to huge amounts of data, and it is growing exponentially. There is so much data that it’s next to impossible to process all of it. Making matters worse, it’s inconsistent: it comes from different sources and different time periods, in different formats.
The end of Moore’s law means that CPUs aren’t just automatically getting significantly faster year after year. Popular data science tools are CPU-constrained, making users sit through long periods of processing time. This is exacerbated by the flood of incoming data making data sets larger than ever.
The popular data science tools are spread out among dozens of software repositories, many of them are open source and revised frequently, and it’s very challenging to find the right versions that will all work together.
#23: The next disruptive evolution in technology is not about new companies disrupting traditional incumbents—it’s about traditional incumbents in “old fashioned industries” using technology to connect their preexisting infrastructures to create increased efficiency.
Red Hat has traditionally served IT organizations and the journey that they have been on for the last decade-plus as software-defined platforms have become prevalent is now coming to OT, which opens up even more potential value, as planning and execution systems converge and formerly closed systems get replaced by open architectures designed to support data-driven insights.
#25: Red Hat’s edge computing Validated Patterns are repositories of configuration templates in the form of Kubernetes manifests that describe an edge computing stack fully declaratively and comprehensively; from its services down to the supporting infrastructure. Validated Patterns facilitate complex, highly reproducible deployments and are ideal for operating these deployments at scale using GitOps operational practices.
Use a GitOps model to deliver the Pattern as code
Use as a POC: modify it to fit a particular need, then evolve it into a real deployment.
Highly reproducible - great for operating at scale
Open for collaboration, so anyone can suggest improvements, contribute to them
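As a hedged sketch of the GitOps model described above (the repository URL, path, and names below are hypothetical, not an actual Validated Patterns repository), an Argo CD Application can keep a cluster continuously synced to a pattern's Git repository, which is what makes the deployments reproducible at scale:

```yaml
# Hypothetical illustration of GitOps delivery via Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-pattern             # hypothetical
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/validated-pattern.git  # hypothetical
    targetRevision: main
    path: charts/edge-stack                                 # hypothetical
  destination:
    server: https://kubernetes.default.svc
    namespace: edge-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

Because the desired state lives in Git, the same pattern can be stamped out on many edge sites, and any drift is corrected automatically.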