Your easy move to serverless computing and radically simplified data processing.
Dr Ofer Biran (STSM and Manager, Cloud and Data Technologies, IBM Haifa Research Lab)
Your easy move to serverless computing and radically simplified data processing (gvernik)
- PyWren-IBM is a Python framework that lets users easily scale Python code across serverless platforms like IBM Cloud Functions without having to learn the underlying storage or function-as-a-service APIs (a minimal usage sketch follows this list).
- It addresses challenges like how to integrate existing applications and workflows with serverless computing, how to process large datasets without becoming a storage expert, and how to scale code without major disruptions.
- The document discusses use cases for PyWren-IBM like Monte Carlo simulations, protein folding, and stock price prediction that demonstrate how it can be used for high performance computing workloads.
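As a rough illustration of that model, here is a minimal sketch of the map/get_result pattern from the pywren-ibm-cloud project's documentation. The module and executor names changed across releases (the project later evolved into Lithops), so treat the exact identifiers as assumptions and verify against the version you install:

```python
# Minimal PyWren-IBM sketch: scale a plain Python function across
# IBM Cloud Functions without touching the FaaS or storage APIs directly.
# Assumes pywren-ibm-cloud is installed and configured with IBM Cloud
# credentials; "ibm_cf_executor" is the executor name as we recall it.
import pywren_ibm_cloud as pywren

def my_function(x):
    # Any ordinary Python function; each input runs as one serverless action.
    return x + 7

pw = pywren.ibm_cf_executor()
pw.map(my_function, range(10))   # fan out one invocation per element
print(pw.get_result())           # gather results: [7, 8, ..., 16]
```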
Multi-Modality Mobile Image Recognition Based on Thermal and Visual Cameras (Jui-Hsin (Larry) Lai)
The advances of mobile computing and sensor technology have turned mobile devices into powerful instruments. Integrating thermal and visual cameras extends the capabilities of computer vision, because the two modalities reveal different characteristics of a scene; however, aligning the two images is a challenge. This paper proposes an effective approach to align image pairs for event detection on mobile devices through image recognition. We leverage thermal and visual cameras as multi-modality sources for image recognition. By analyzing heat patterns, the proposed app can identify heating sources and help users inspect their house heating system; by applying image recognition, the app can also help field workers assess asset condition and provide guidance for resolving issues.
Tips & Tricks On Architecting Windows Azure For Costs (Nuno Godinho)
The document provides tips and tricks for architecting Windows Azure to reduce costs. It discusses strategies for optimizing compute, bandwidth, storage, transactions, SQL Azure usage, customer awareness, and developer awareness. Specific scenarios are analyzed, such as hosting a static website and storing application data, comparing the costs of different storage and compute options. The key takeaway is that careful planning and optimization of resources can significantly reduce Azure usage costs.
Serverless on AWS : Understanding the hard parts at Froscon 2019 (Vadym Kazulkin)
In our talk we dive deeper into the serverless world and show how a production-ready serverless application can be built on the AWS cloud with a technology stack of API Gateway, SNS, Lambda and DynamoDB. We address the challenges of the individual services, such as "cold starts" with Lambda or "provisioned throughput" and "adaptive capacity" with DynamoDB, and show which strategies and approaches exist for dealing with them. We also cover topics such as implementing aggregation logic and (scheduled) auto scaling with DynamoDB. Finally, we take a look into the future and talk about the first relational serverless database, "Aurora Serverless".
Offre Cloud IBM Software [Rational] - Atelier - Forum SaaS et Cloud IBM - Clu... (Club Alliances)
Presentation prepared by Michel Speranski [IBM Software - Rational] for the IBM SaaS and Cloud Forum [5 February 2010], organized by the Club Alliances.
1) Db2 on Cloud is a fully managed SQL database that can be deployed on IBM Cloud with one click.
2) It offers high availability, automatic scaling, and disaster recovery capabilities without complex configuration.
3) The key differences between Db2 on Cloud and Db2 Hosted are that Db2 on Cloud is fully managed while Db2 Hosted provides root access for custom configurations needed for some "lift and shift" migrations.
Moonbot Studios Shoots for the Cloud to Meet Deadlines and Manage Costs
Facing deadlines for Academy Award submissions, Moonbot Studios ran short of rendering capacity while working on Taking Flight, its newest animated short film, and other important projects. As a small studio with a matching budget, the team did what it does best: it got creative and solved the problem with what they first called "magic."
In this webinar, the Moonbot team will tell its tale of sending its rendering capacity to Google Compute Engine and how they defied networking odds by caching data close to the animators with an Avere vFXT. Hear Moonbot’s pipeline supervisor tell how they turned cloud data center distance into a non-issue, met deadlines, and gained quantitative benefits that sparked energy in this small team of creative aviators.
In this session, you will learn:
•What drove the Moonbot Studios to move to the cloud
•How they moved complex renders to Google Compute Engine, overcoming data access roadblocks
•Measurable results including speed, economics, flexibility, and creative freedom
The Moonbot Studios flight to the cloud will be supported by Google Cloud Platform and Avere Systems for a complete overview of how the technologies help bring new ideas to life.
S cv0879 cloud-storage-options-edge2015-v4 (Tony Pearson)
IBM is ranked #2 in cloud storage. Learn how IBM XIV, DS8000, Spectrum Accelerate, FlashSystem, SAN Volume Controller and the rest of the Storwize family built with Spectrum Virtualize, Spectrum Scale, Elastic Storage Server, Storwize V7000 Unified, and offerings from IBM SoftLayer cloud services fit into IBM's cloud storage portfolio.
4156 Twist and Cloud - How IBM customers make CICS dance (nick_garrod)
InterConnect 2015 Session 4156, Twist and Cloud - How customers make CICS dance. Putting CICS into the cloud - is anyone actually doing this? Since 2012, CICS TS Version 5 has been introducing and continuously enhancing capabilities that support a Platform-as-a-Service approach. Join speakers from CICS Technical Sales to hear about their experiences and how their customers gained value from cloud capabilities in CICS.
AI & Machine Learning Pipelines with Knative (Animesh Singh)
The document discusses the need for Knative to build cloud-native AI platforms. It describes an AI lifecycle with multiple iterative phases: data preparation, model training, deployment, and monitoring. It argues that Kubernetes alone is not sufficient, and that building, serving, eventing, and pipeline capabilities are required to automate the end-to-end AI workflow. It introduces Knative as a set of building blocks on top of Kubernetes that provides these capabilities through custom resource definitions. Specifically, Knative provides source-to-container builds, event delivery and subscription, request-driven scalable serving of models, and configuration of CI/CD-style pipelines for Kubernetes applications.
PureApplication form factors: PureApplication Systems, PureApplication Software, PureApplication on SoftLayer, and PureApplication on Azure, plus BlueMix compared with PureApplication.
AICamp - Dr Ramine Tinati - Making Computer Vision Real (Ramine Tinati)
The document provides an overview of computer vision and convolutional neural networks (CNNs). It discusses the basic architecture of CNNs, including convolutional layers, pooling layers, fully connected layers, and other concepts like activation functions, loss functions, regularization, and transfer learning. It also covers techniques for measuring CNN performance, such as precision, recall, average precision (AP), and intersection over union (IoU). The goal is to introduce computer vision and explain how CNNs work at a high level.
A Step By Step Guide To Put DB2 On Amazon Cloud (Deepak Rao)
This document provides steps for setting up DB2 9.7 on Amazon Web Services (AWS). It discusses key AWS services like EC2, S3, EBS, and AMIs. The steps include creating an AWS account, launching a pre-configured DB2 AMI instance on EC2, accepting the product license, configuring security and storage, creating databases, and testing connectivity. Costs for 5 hours of running DB2 on AWS are also estimated.
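The deck walks through the AWS console; for a sense of the same launch step in code, here is a hedged boto3 sketch (a modern SDK, not what a DB2 9.7-era deck used). The AMI ID, key pair, and security group below are hypothetical placeholders, not real values:

```python
# Launch an EC2 instance from a pre-configured DB2 AMI, mirroring the deck's
# console walkthrough. All identifiers below are hypothetical placeholders.
import boto3

DB2_AMI_ID = "ami-0123456789abcdef0"   # placeholder: look up a real DB2 AMI

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId=DB2_AMI_ID,
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder: an existing EC2 key pair
    SecurityGroupIds=["sg-0example"],  # placeholder: allow SSH and DB2 port 50000
)
print(instances[0].id)                 # instance ID of the newly launched VM
```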
Cloud Orchestrator - IBM Software Defined Environment Event (Denny Muktar)
IBM Cloud Orchestrator automates the provisioning and management of IT services across public, private and hybrid clouds. It reduces the number of steps required through an easy-to-use interface and provides access to pre-built automation patterns. The tool integrates management functions like monitoring, metering, and capacity planning. It also includes a catalog of automation packages from IBM and partners that can be dragged and dropped to quickly compose workflows for deploying applications and infrastructure.
Demandware is an eCommerce software provider that leverages cloud computing capabilities. They developed CloudBox, which ports their platform to Amazon EC2, reducing sandbox costs from $64/month to $25-60/month. They also prototyped an image transformation service on EC2 for on-demand resizing and formatting. This allows elastic hosting and worldwide testing. Both projects showed the flexibility and cost savings of the cloud.
Building Serverless Apps with Kafka (Dale Lane, IBM) Kafka Summit London 2019 (confluent)
Serverless (also known as function-as-a-service) is fast emerging as an effective architecture for event-driven applications. Apache OpenWhisk is one of the more popular open-source cloud serverless platforms, and has first-class support for Kafka as a source of events. Come to this session for an introduction to building microservices without servers using OpenWhisk. I'll describe the challenges of building applications on serverless stacks, and the serverless design patterns to help you get started. I'll give a demonstration of how you can use Kafka Connect to invoke serverless actions, and how serverless can be an effective way to host event-processing logic.
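For flavor, here is a minimal sketch of the kind of OpenWhisk Python action such a pipeline might invoke. The "messages" payload shape follows our reading of the openwhisk-package-kafka feed documentation and should be treated as an assumption; field names may differ by version:

```python
# Minimal sketch of an OpenWhisk Python action triggered by a Kafka feed.
# Assumes the event payload carries a "messages" list whose items expose a
# "value" field (as the openwhisk-package-kafka feed provides), and that the
# message values are JSON-encoded; adjust the parsing for your topic's format.
import json

def main(params):
    messages = params.get("messages", [])
    total_amount = 0
    for msg in messages:
        event = json.loads(msg.get("value", "{}"))
        total_amount += event.get("amount", 0)  # hypothetical field
    # Whatever the action returns becomes the activation result.
    return {"processed": len(messages), "total_amount": total_amount}
```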
This document describes IBM's Private Modular Cloud (PMC) offering. The PMC allows clients to rapidly implement a private cloud deployment within their own data centers using standardized application patterns and automated provisioning. It offers infrastructure as a service (IaaS) and platform as a service (PaaS) capabilities. The PMC can be deployed on IBM, OEM, or open source cloud management software stacks and hardware from IBM, SoftLayer, or the client's own equipment. It provides automated provisioning of virtual machines, databases, and middleware through a self-service portal.
Calculating TCO for Cloud-based Applications (Coupa Software)
Calculating TCO for Cloud-based Applications. Listen to the webinar replay here: http://get.coupa.com/tco-webcast.html?src=sh
The IBM Open Cloud Architecture (and Platform) (Florian Georg)
IBM's open cloud architecture strategy focuses on DevOps practices using a technology stack that includes OpenStack for IaaS, CloudFoundry PaaS, and SaaS applications. This provides on-premise private clouds with IBM PureFlex and SoftLayer, as well as IBM BlueMix for public clouds. IBM UrbanCode Deploy automates application deployments across environments, while IBM DevOps Services provides a pipeline for development, integration and delivery of applications on CloudFoundry and other platforms.
Implementing Large Scale Digital Asset Repositories with Adobe Experience Man... (devang-dsshah)
This document discusses scaling digital asset management systems to handle large repositories. It begins by describing challenges that can arise from trying to scale a DAM system without optimization. It then provides recommendations for optimizing asset processing workflows and configurations. Finally, it outlines several architectural approaches for scaling a DAM system, such as using separate instances for ingestion/processing and executing intensive tasks. The goal is to first optimize and then scale the system in a way that matches asset usage and adds necessary resources.
AWS Certified Cloud Practitioner Course S7-S10 (Neal Davis)
This deck contains the slides from our AWS Certified Cloud Practitioner video course. It covers:
Section 7: DNS, Elastic Load Balancing, and Auto Scaling
Section 8: Application Services
Section 9: Amazon VPC, Networking, and Hybrid
Section 10: Deployment and Automation
Full course can be found here: https://digitalcloud.training/courses/aws-certified-cloud-practitioner-video-course/
This presentation discusses IBM Cloud IaaS - SoftLayer. It covers SoftLayer infrastructure including networking, storage, servers, operating systems, virtualization, and applications. Specific server configurations and capabilities are presented, and storage options including performance storage, endurance storage, and object storage are described. The presentation also covers API access and the additional applications available on SoftLayer.
Serverless computing allows code to be executed in response to events without having to manage infrastructure. IBM Cloud Functions is IBM's serverless platform. It is a good fit for serverless APIs and microservices, massively parallel compute tasks, data processing workflows, event stream processing, scheduled tasks, and applications that use cognitive services.
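As an illustration of how little scaffolding such a function needs, here is a minimal IBM Cloud Functions (Apache OpenWhisk) Python action. The deploy command in the comment reflects the ibmcloud CLI as we recall it, so verify it against the current docs:

```python
# hello.py - a minimal IBM Cloud Functions (Apache OpenWhisk) Python action.
# Deployed with something like: ibmcloud fn action create hello hello.py
# (CLI name and flags are from the docs as we recall them; verify locally.)

def main(params):
    # "params" carries the JSON payload of the invoking event or HTTP call.
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```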
Accelerate Digital Transformation with IBM Cloud Private (Michael Elder)
Accelerate the journey to cloud-native, refactor existing mission-critical workloads, and catalyze enterprise digital transformations.
How do you ensure the success of your enterprise in highly competitive market landscapes? How will you deliver new cloud-native workloads, modernize existing estates, and drive integration between them?
Evolve or Fall Behind: Driving Transformation with Containers - Sai Vennam - ... (CodeOps Technologies LLP)
This presentation was the opening session of the Container Conference 2018 in Bangalore.
"IBM Developer Advocate Sai Vennam speaks about the latest emerging technology in the container space - from managed Kubernetes offerings to open-source tools like Istio and containerd. You'll also see how container technology is driving transformation in all industries across the world."
URL: www.containerconf.in
IBM Cloud Paris Meetup - 20180628 - Rex on ODM on Cloud (IBM France Lab)
This document discusses deploying IBM Operational Decision Manager (ODM) on Kubernetes. It provides a brief history of moving ODM from on-premise to Docker and Kubernetes. It discusses tips for building Docker images for ODM and using Helm charts to deploy ODM on Kubernetes. Finally, it discusses deploying ODM on IBM Cloud Private using Docker images and Helm charts to provide a production-ready deployment of ODM on Kubernetes.
Building a multi-tenant cloud service from legacy code with Docker containers (aslomibm)
A reusable architectural pattern for migrating a legacy application to a cloud service. The pattern can be used by other legacy applications that need to move to the cloud. The architecture was validated by the Beta release of the IBM Bluemix Workflow service; Docker containers were the key capability for managing a separate workflow engine per tenant, combined with a cloud database for the persistence layer and content-based routing.
Unleashing Apache Kafka and TensorFlow in the Cloud (Kai Wähner)
How can you leverage the flexibility and extreme scale in the public cloud combined with your Apache Kafka ecosystem to build scalable, mission-critical machine learning infrastructures, which span multiple public clouds or bridge your on-premise data centre to cloud?
This talk will discuss and demo how you can leverage machine learning technologies such as TensorFlow with your Kafka deployments in public cloud to build a scalable, mission-critical machine learning infrastructure for data preprocessing and ingestion, and model training, deployment and monitoring.
The discussed architecture includes capabilities like scalable data preprocessing for training and predictions, combination of different Deep Learning frameworks, data replication between data centres, intelligent real time microservices running on Kubernetes, and local deployment of analytic models for offline predictions.
Deep Learning UDF for KSQL for Streaming Anomaly Detection of MQTT IoT Sensor Data:
I built a KSQL UDF for sensor analytics. It leverages the new API features of KSQL to build UDF / UDAF functions easily with Java to do continuous stream processing on incoming events.
Use Case: Connected Cars - Real Time Streaming Analytics using Deep Learning
Continuously process millions of events from connected devices (sensors of cars in this example).
The document discusses cloud management challenges and how RightScale addresses them with its cloud management platform. It summarizes that RightScale provides unified management of multiple public and private clouds, enables self-service provisioning while maintaining governance controls, and offers automated tools to help enterprises scale their IT operations in the cloud.
Toward Hybrid Cloud Serverless Transparency with Lithops Framework (LibbySchulze)
The document describes using the Lithops framework to simplify serverless data pre-processing of images by extracting faces and aligning them. Lithops allows processing millions of images located in different storage locations in a serverless manner without having to write boilerplate code to access storage or partition data. It handles parallel execution, data access, and coordination to run a user-defined function that pre-processes each image on remote servers near the data. This avoids having to move large amounts of data and allows leveraging serverless cloud compute resources to speed up processing times significantly compared to running everything locally.
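A hedged sketch of what that looks like with Lithops' map API. The bucket URI is hypothetical, and the obj/data_stream parameter convention is taken from the Lithops documentation as we recall it, so verify against the installed version:

```python
# Lithops sketch: fan a pre-processing function out over every object
# in a storage bucket, with no explicit storage or partitioning code.
import lithops

def extract_faces(obj):
    # "obj" is Lithops' handle to one object in the bucket; obj.data_stream
    # reads the image bytes on the worker running near the data.
    image_bytes = obj.data_stream.read()
    # Face detection / alignment would go here (e.g. with OpenCV).
    return len(image_bytes)

fexec = lithops.FunctionExecutor()
# Hypothetical bucket URI; Lithops enumerates the objects under the prefix
# and launches one serverless invocation per object (or per chunk).
fexec.map(extract_faces, "cos://my-images-bucket/raw/")
print(fexec.get_result())
```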
Getting Started with MariaDB with Docker (MariaDB plc)
This document discusses deploying MariaDB databases with Docker from development to production. It recommends using Docker containers to encapsulate dependencies and isolate processes for easy deployment on-premise, in the cloud, or in hybrid environments. It highlights challenges like orchestration complexity and outlines requirements for data durability, self-discovery, self-healing, and application discovery of database clusters. It demonstrates building a Python/Flask app in Docker, deploying it to a Swarm cluster, and scaling the web tier behind HAProxy. It also shows deploying a 3-node Galera MariaDB cluster and 2-node MaxScale proxy for high availability.
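For the dev-environment end of that journey, here is a small sketch using the Docker SDK for Python instead of the CLI. The image tag and password are placeholders, and MARIADB_ROOT_PASSWORD follows the official image docs (older images used MYSQL_ROOT_PASSWORD):

```python
# Start a throwaway MariaDB container for local development.
import docker

client = docker.from_env()
db = client.containers.run(
    "mariadb:10.6",                                   # placeholder tag
    environment={"MARIADB_ROOT_PASSWORD": "devpassword"},
    ports={"3306/tcp": 3306},                         # expose MariaDB locally
    name="mariadb-dev",
    detach=True,                                      # return immediately
)
print(db.status)  # e.g. "created"; the container keeps running in background
```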
IBM Cloud UCC Talk, 8th December 2020 - Cloud Native, Microservices, and Serv... (Michael O'Sullivan)
A lecture to the students of the University College Cork 3rd year Undergraduate Computer Science class, CS3204 (Cloud Infrastructure and Services) on Cloud Native Computing, Microservices, and Serverless computing, on the IBM Cloud. Several examples and a live demo were included. Also contains discussions of the 12-Factor app, and monolith vs. microservice-based applications.
Helm summit 2019_handling large number of charts_sept 10 (Shikha Srivastava)
Now that you have an application running in Kubernetes, what will your next steps be? Can you deploy this application to any cloud? If someone else wishes to install your helm chart, would they have all the necessary resources to deploy it successfully? Do you have a certification process to ensure your helm chart is enterprise-ready? Creating a helm chart to deploy your application is just the first step; you now need a process to ensure that the helm chart follows the guidelines established by your enterprise and that future versions of the chart are created efficiently as part of your CI/CD pipeline. In this presentation, you will learn about effective ways to create, organize and maintain enterprise-grade helm charts. We will also discuss how our CI/CD pipeline is implemented, using a custom linter and verification test cases to make sure only certified charts are promoted into production.
This document discusses the rise of serverless architectures. It begins by defining serverless computing and functions as a service (FaaS), where code is deployed and automatically scales in response to events or triggers, with the vendor handling the provisioning and management of servers. Examples of use cases for FaaS include APIs, bots, file processing, and more. While the advantages include scalability and paying only for usage, the limitations include statelessness and cold starts. The document outlines the serverless ecosystem and frameworks, and how serverless is changing business models, architectures, and operational practices in a more distributed, event-driven way.
At wetter.com we build analytical B2B data products and heavily use Spark and AWS technologies for data processing and analytics. I explain why we moved from AWS EMR to Databricks and Delta, and share our experiences from different angles like architecture, application logic, and user experience. We will look at how security, cluster configuration, resource consumption, and workflows changed by using Databricks clusters, as well as how using Delta tables simplified our application logic and data operations.
SpringOne Platform 2016
Speakers: Tom Collings; Senior Enterprise Architect, ECS Team. Dustin Ruehle; Director Integration, ECS Team.
Cloud Foundry is a highly-available Platform-as-a-Service that provides organizations a stable environment to host their applications. Pivotal Cloud Foundry also includes the concept of tiles, which provide functionality for other services. When installed, tiles gain the benefits of being managed by the PaaS such as reliability and high availability. Examples of these tiles include MySQL, RabbitMQ, and Spring Cloud Services. Administrators can generate brokered instances of these services which are then available to any application running in the PaaS.
Organizations often find themselves in the position of owning custom functionality (e.g. a payment processing service) that would best be implemented as a tile in the PCF installation. Pivotal has recently introduced a new tile generation utility, which makes the generation of custom tiles a practical endeavor. In this session, attendees will learn: the benefits of generating a tile, some of the criteria used to decide whether a tile or some other mechanism is best for your organization, a short demonstration of a tile generation utility provided by Pivotal, and how to operationalize the maintenance of a tile.
To adopt cloud computing in the enterprise, there are many things to consider and manage. This material presents IBM's point of view on the challenges of bringing cloud computing to an organization.
Scale Machine Learning from zero to millions of users (April 2020) (Julien SIMON)
This document discusses scaling machine learning models from initial development to production deployment for millions of users. It outlines several options for scaling models from a single instance to large distributed systems, including using Amazon EC2 instances with automation, Docker clusters on ECS/EKS, or the fully managed SageMaker service. SageMaker is recommended for ease of scaling training and inference with minimal infrastructure management required.
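To make the SageMaker option concrete, here is a hedged sketch with the SageMaker Python SDK (v2-style parameter names). The container image URI, IAM role, S3 bucket, and instance choices are placeholders, not recommendations:

```python
# Train on managed, scalable infrastructure, then deploy behind an endpoint.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/MySageMakerRole",                        # placeholder
    instance_count=4,              # scale training out across instances
    instance_type="ml.p3.2xlarge",
)
estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder S3 channel

# One call publishes the trained model behind a managed HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```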
This document provides an overview of a hands-on training on using AWS services for developers and those handling data on AWS. It introduces key AWS services like EC2, S3, DynamoDB, RDS, EMR and Redshift. The hands-on sessions guide participants in creating simple web applications using these services, with exercises involving setting up instances, loading and querying databases, and analyzing sample data stored on S3 using EMR. The goal is to help participants build infrastructure for applications and get experience combining different AWS services.
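In the spirit of those exercises, here is a minimal sketch combining two of the services with boto3. The bucket and table names are placeholders you would create first (the DynamoDB table is assumed to have a string partition key named "id"):

```python
# Upload a sample data file to S3, then record a row in DynamoDB -
# the kind of two-service exercise the hands-on sessions combine.
import boto3

s3 = boto3.client("s3")
s3.upload_file("sample.csv", "my-training-bucket", "data/sample.csv")  # placeholder bucket

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("TrainingRecords")  # placeholder table, key: "id"
table.put_item(
    Item={
        "id": "sample-1",
        "source": "s3://my-training-bucket/data/sample.csv",
    }
)
```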
Big Data lies at the core of the strong data economy that is emerging in Europe. Although both large enterprises and SMEs acknowledge the potential of Big Data to disrupt markets and business models, this is not reflected in the growth of the data economy. The lack of trusted, secure, ethics-driven personal data platforms and privacy-aware analytics hinders the growth of the data economy and creates concerns. The main considerations relate to the secure sharing of personal and proprietary/industrial data, and the definition of a fair remuneration mechanism that can capture, produce, release and cash out the value of data, always for the benefit of all the involved stakeholders.
This webinar will focus on how such concerns around privacy, ethics and intellectual property rights can be tackled by allowing individuals to take ownership and control of their data and share it at will, through flexible data sharing and fair compensation schemes with other entities (companies or otherwise), as researched by the DataVaults project.
Intro - Three pillars for building a Smart Data Ecosystem: Trust, Security an... (Big Data Value Association)
Today's data marketplaces are large, closed ecosystems in the hands of a few established players or consortia that decide on the rules, policies, etc.
Yet, the main barrier of the European data economy is the fact that current data spaces and marketplaces are “siloes”, without support for data exchange across their boundaries.
This webinar reveals how these boundaries can be overcome through the i3-MARKET “backplane”, which is an infrastructure able to connect all the stakeholders providing the suitable level of trust (consensus-based self-governing, auditability, reliability, verifiable credentials), security (P2P encryption, cryptographic proofs) and privacy (self-sovereign identity, zero-knowledge proof, explicit user consent).
Three pillars for building a Smart Data Ecosystem: Trust, Security and Privacy (Big Data Value Association)
The document discusses three pillars for building a smart data ecosystem: trust, security, and privacy. It summarizes an event on these topics from the i3-MARKET project. Trust can be achieved through blockchain technologies like consensus-based governance, tamper-proof ledgers, and verifiable credentials. Security involves hardware wallets, encryption, and multi-factor authentication. Privacy addresses GDPR requirements like data minimization, user consent, and accountability through self-sovereign identity, selective disclosure of information, and auditable accounting of data exchanges.
Market into context - Three pillars for building a Smart Data Ecosystem: Trus... (Big Data Value Association)
BDV Skills Accreditation - Future of digital skills in Europe reskilling and ... (Big Data Value Association)
The Digital Skills and Jobs Coalition is a community supported by the European Commission to address the digital skills gap in Europe. It includes 25 National Coalitions and over 500 member organizations working on initiatives to boost digital skills. There is a high demand for ICT specialists but also a shortage, and programs aim to reskill and upskill Europeans. Investments are needed in both digital technologies and skills to take advantage of new technologies like artificial intelligence and ensure data skills and literacy. Bridges between industry and universities are important to develop specialized curricula focused on growing skills in high-demand areas such as software engineering and cybersecurity. The Coalition highlights various member pledges and initiatives that have provided digital skills training and certifications to over
The objective of the workshop is to highlight the need for pan-European skill recognition for Big Data that stimulates mobility and fulfils the definition of overarching Learning Objectives & Overarching Learning Impacts. It is also meant to gather feedback on the formats being prepared, namely the use of Badges, the Label, and the EIT Label for professionals.
BDV Skills Accreditation - Recognizing Data Science Skills with BDV Data Scie... (Big Data Value Association)
EIT Label intro by Roberto Prieto
Muluneh Oli (EIT Digital)
BDV Skills Accreditation - Definition and ensuring of digital roles and compe... (Big Data Value Association)
BigDataPilotDemoDays - I-BiDaaS Application to the Manufacturing Sector Webinar (Big Data Value Association)
The new data-driven industrial revolution highlights the need for big data technologies to unlock the potential in various application domains. To this end, the BDV PPP projects I-BiDaaS, BigDataStack, Track & Know and Policy Cloud deliver innovative technologies to address the emerging needs of data operations and applications. To ensure sustainability and take full advantage of the developed technologies, the projects onboarded pilots that demonstrate their applicability in a wide variety of sectors. In the Big Data Pilot Demo Days, the projects showcase the developed and implemented technologies to interested end-users from industry as well as technology providers, for further adoption.
One of the main goals of the I-BiDaaS project is to provide a Big Data self-service solution that empowers the actual employees of European companies in targeted sectors (banking, manufacturing, telecom), i.e., the true decision-makers, with the insights and tools they need to make the right decisions in an agile way. In this big data pilot webinar, we will demonstrate the I-BiDaaS self-service solution and its application to the banking sector step by step. In more detail, we will present an overview of the I-BiDaaS project focusing on the requirements of the CaixaBank pilot study, the I-BiDaaS architecture with its core technologies, and a step-by-step demo of the I-BiDaaS solution. Last but not least, we will show through CaixaBank's success story how I-BiDaaS can address challenges around data availability, data sharing, and breaking silos in the banking domain.
At the heart of this DataBench webinar is the goal of sharing a benchmarking process that helps European organisations developing Big Data Technologies reach for excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance.
The webinar aims to provide the audience with a framework and tools to assess the performance and impact of Big Data and AI technologies, drawing on real insights from DataBench. In addition, representatives from other BDV PPP projects such as DeepHealth and They-Buy-For-You will participate to share the challenges and opportunities they have identified in the use of Big Data, Analytics, and AI. The perspective of other projects that have also looked into benchmarking, such as Track & Know and I-BiDaaS, will be introduced.
Virtual BenchLearning - I-BiDaaS - Industrial-Driven Big Data as a Self-Servi... (Big Data Value Association)
The problem of radicalisation is very high on the European agenda, as increasing numbers of young European radicals return from Syria and use the internet to disseminate propaganda. To enable policy makers to design policies that address radicalisation effectively, the Policy Cloud consortium will collect data from social media and other sources, including the open-source Global Terrorism Database (GTD), the Onion City search engine (which accesses data on TOR dark web sites), and Twitter (through Firehose). The data will be analysed using sentiment analysis and opinion mining software.
Policy Cloud Data Driven Policies against Radicalisation - Participatory poli... (Big Data Value Association)
How to regulate and control your IT outsourcing provider with process mining (Process mining Evangelist)
Oliver Wildenstein is an IT process manager at MLP. As in many other IT departments, he works together with external companies who perform supporting IT processes for his organization. With process mining he found a way to monitor these outsourcing providers.
Rather than having to believe the self-reports from the provider, process mining gives him a controlling mechanism for the outsourced process. Because such analyses are usually not foreseen in the initial outsourcing contract, companies often have to pay extra to get access to the data for their own process.
This presentation provides a comprehensive introduction to Microsoft Excel, covering essential skills for beginners and intermediate users. We will explore key features, formulas, functions, and data analysis techniques.
Bram Vanschoenwinkel is a Business Architect at AE. Bram first heard about process mining in 2008 or 2009, when he was searching for new techniques with a quantitative approach to process analysis. By now he has completed several projects in payroll accounting, public administration, and postal services.
The discovered AS IS process models are based on facts rather than opinions and, therefore, serve as the ideal starting point for change. Bram uses process mining not as a standalone technique but complementary and in combination with other techniques to focus on what is really important: Actually improving the process.
GenAI for Quant Analytics: survey-analytics.ai (Inspirient)
Pitched at the Greenbook Insight Innovation Competition as part of IIEX North America 2025 on 30 April 2025 in Washington, D.C.
Join us at survey-analytics.ai!
Decision Trees in Artificial-Intelligence.pdf (Saikat Basu)
Have you heard of something called 'Decision Tree'? It's a simple concept which you can use in life to make decisions. Believe you me, AI also uses it.
Let's find out how it works in this short presentation. #AI #Decisionmaking #Decisions #Artificialintelligence #Data #Analysis
https://saikatbasu.me
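As a concrete companion to the idea, here is a small, self-contained example (not from the deck) that trains a decision tree with scikit-learn and prints the if/then rules it learned:

```python
# Train a small decision tree on the classic iris dataset and
# print the learned decision rules in readable form.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"accuracy: {tree.score(X_test, y_test):.2f}")
print(export_text(tree))  # the tree rendered as nested if/then decisions
```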
Johan Lammers from Statistics Netherlands has been a business analyst and statistical researcher for almost 30 years. In their business, processes have two faces: You can produce statistics about processes and processes are needed to produce statistics. As a government-funded office, the efficiency and the effectiveness of their processes is important to spend that public money well.
Johan takes us on a journey of how official statistics are made. One way to study dynamics in statistics is to take snapshots of data over time. A special way is the panel survey, where a group of cases is followed over time. He shows how process mining could test certain hypotheses much faster compared to statistical tools like SPSS.
indonesia-gen-z-report-2024 Gen Z (born between 1997 and 2012) is currently t... (disnakertransjabarda)
Gen Z (born between 1997 and 2012) is currently the biggest generation group in Indonesia, at 27.94% of the total population, or 74.93 million people.
Frank van Geffen is a Business Analyst at the Rabobank in the Netherlands. The first time Frank encountered Process Mining was in 2002, when he graduated on a method called communication diagnosis. He stumbled upon the topic again in 2008 and was amazed by the possibilities.
Frank shares his experiences after applying process mining in various projects at the bank. He thinks that process mining is most interesting for the Process Manager / Process Owner (accountable for all aspects of the complete end-to-end process), the Process Analyst (responsible for performing the process mining analysis), the Process Auditor (responsible for auditing processes), and the IT department (responsible for the development/acquisition, delivery and maintenance of the process mining software).
Zig Websoftware creates process management software for housing associations. Their workflow solution is used by the housing associations to, for instance, manage the process of finding and on-boarding a new tenant once the old tenant has moved out of an apartment.
Paul Kooij shows how they could help their customer WoonFriesland to improve the housing allocation process by analyzing the data from Zig's platform. Every day that a rental property is vacant costs the housing association money.
But why does it take so long to find new tenants? For WoonFriesland this was a black box. Paul explains how he used process mining to uncover hidden opportunities to reduce the vacancy time by 4,000 days within just the first six months.
Lalit Wangikar, a partner at CKM Advisors, is an experienced strategic consultant and analytics expert. He started looking for data-driven ways of conducting process discovery workshops. When he first read about process mining, about two years ago, his first feeling was: “I wish I knew of this while doing the last several projects!”.
Interviews are subject to all the whims of human recollection: specifically recency, simplification and self-preservation. Interview-based process discovery therefore leaves out a lot of “outliers” that usually end up being among the biggest opportunity areas. Process mining, in contrast, provides an unbiased, fact-based and very comprehensive understanding of actual process execution.
Tijn van der Heijden is a business analyst with Deloitte. He learned about process mining during his studies in a BPM course at Eindhoven University of Technology and became fascinated with the fact that it was possible to get a process model and so much performance information out of automatically logged events of an information system.
Tijn successfully introduced process mining as a new standard to achieve continuous improvement for the Rabobank during his Master project. At his work at Deloitte, Tijn has now successfully been using this framework in client projects.
Your easy move to serverless computing and radically simplified data processing
1. Dr. Ofer Biran, Dr. Gil Vernik
IBM Haifa Research Lab
Your Easy Move to Serverless Computing:
Radically Simplified Data Processing
2. Agenda
What problem we solve
Why serverless computing
Easy move to serverless with PyWren-IBM
PyWren-IBM use cases
3. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825184.
http://cloudbutton.eu
4. Problem: Large Scale Simulations
• Alice is working in the risk management department at the bank
• She needs to evaluate a new contract
• She decided to run a Monte Carlo simulation to evaluate the contract
• About 100,000,000 calculations are needed for a reasonable estimation
This Photo by Unknown Author is licensed under CC BY-SA
5. The challenge
How and where to scale the code of a Monte Carlo simulation?
(diagram: the business logic to be scaled out)
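To make the challenge concrete, here is a minimal sketch (not part of the original slides) of how such a simulation could be fanned out with the PyWren-IBM API introduced later in this deck. The chunk size, the number of invocations and the payoff function are all hypothetical placeholders for Alice's business logic.

import random
import pywren_ibm_cloud as cbutton

def monte_carlo_chunk(n_samples):
    # Each serverless task runs n_samples independent trials and
    # returns a partial sum of the (hypothetical) contract payoffs.
    total = 0.0
    for _ in range(n_samples):
        total += max(random.gauss(100, 20) - 105, 0)  # placeholder payoff
    return total

cb = cbutton.ibm_cf_executor()
cb.map(monte_carlo_chunk, [100000] * 1000)       # 1,000 parallel invocations
estimate = sum(cb.get_result()) / 100000000      # combine the partial sums
print(estimate)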
6. Problem: Big Data processing
• Maria needs to run face detection using TensorFlow over millions of images. The process requires raw images to be pre-processed before they are used by TensorFlow
• Maria wrote the code and tested it on a single image
• Now she needs to execute the same code at massive scale, with parallelism, on terabytes of data stored in object storage
(figure: a raw image and the corresponding pre-processed image)
7. The challenge
How to scale the code to run in parallel on terabytes of data without becoming a systems expert in scaling code and learning storage semantics?
(diagram: IBM Cloud Object Storage)
8. So the challenges are:
• How and where to scale the code?
• How to process massive data sets without becoming a storage expert?
• How to scale certain flows of an existing application without major disruption to the existing system?
9. VMs, containers and the rest
• The naive way to scale an application is to provision highly resourced virtual machines and run the application there
• Complicated, time-consuming, expensive
• A recent trend is to leverage container platforms
• Containers have better granularity than VMs, better resource allocation, and so on
• Docker containers became popular, yet many challenges remain in how to ”containerize” existing code or applications
• Comparing VMs and containers is beyond the scope of this talk…
• Serverless: Function as a Service platforms
10. Serverless: Function as a Service
(diagram: an event triggers the deployed code() as an action)
• The unit of computation is a function
• A function is a short-lived task
• Smart activation, event driven, etc.
• Usually stateless
• Transparent auto-scaling
• Pay only for what you use
• No administration
• All other aspects of the execution are delegated to the Cloud Provider
IBM Cloud Functions
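For illustration (an addition to the slide, following the standard Apache OpenWhisk convention that IBM Cloud Functions is built on), a Python action is simply a function that receives and returns a dictionary; activation and scaling are left to the platform:

# hello.py - a minimal OpenWhisk-style Python action (illustrative sketch).
# The platform invokes main() once per event and scales instances transparently.
def main(params):
    name = params.get('name', 'world')
    return {'greeting': 'Hello ' + name}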
11. Are there still challenges?
• How to integrate FaaS into existing applications and frameworks without major disruption?
• Users need to be familiar with the APIs of the storage and FaaS platforms
• How to control and coordinate invocations
• How to scale the input and generate the output
13. Push to the cloud with PyWren
• Serverless for more use cases (not just event-based or “glue” for services)
• A push-to-the-cloud experience
• Designed to scale Python applications at massive scale
(diagram: Python code fanned out to serverless actions 1 through 1000)
14. Cloud Button Toolkit
• PyWren-IBM (aka the CloudButton Toolkit) is a novel Python framework extending the original RISELab PyWren
• 600+ commits to PyWren-IBM on top of PyWren
• Being developed as part of the CloudButton project
• Led by IBM Research Haifa
• Open source: https://github.com/pywren/pywren-ibm-cloud
15. PyWren-IBM example

import pywren_ibm_cloud as cbutton

data = [1, 2, 3, 4]

def my_map_function(x):
    return x + 7

cb = cbutton.ibm_cf_executor()
cb.map(my_map_function, data)
print(cb.get_result())  # [8, 9, 10, 11]

PyWren-IBM executes my_map_function over IBM Cloud Functions.
19. PyWren-IBM over Object Store

import pywren_ibm_cloud as cbutton

data = "cos://mybucket/year=2019/"

def my_map_function(obj, boto3_client):
    # business logic
    return obj.name

cb = cbutton.ibm_cf_executor()
cb.map(my_map_function, data)
print(cb.get_result())  # [d1.csv, d2.csv, d3.csv, ….]

PyWren-IBM discovers the objects under the given prefix and executes my_map_function over IBM Cloud Functions.
20. Unique differentiations of PyWren-IBM
• Pluggable implementation for FaaS platforms
• IBM Cloud Functions, Apache OpenWhisk, OpenShift by Red Hat, Kubernetes
• Supports Docker containers
• Seamless integration with Python notebooks
• Advanced input data partitioner
• Data discovery to process large amounts of data stored in IBM Cloud Object Storage, chunking of CSV files, supports user-provided partition logic
• Unique functionalities
• Map-Reduce, monitoring, retry, in-memory queues, authentication token reuse, pluggable storage backends (see the configuration sketch below), and many more…
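As a rough illustration of how the pluggable backends are selected, here is a configuration sketch. The 'ibm_cos' key names mirror the config dictionary referenced in the slide 31 code (config['ibm_cos']['api_key'], config['ibm_cos']['endpoint']); the 'ibm_cf' section and all values are assumptions with placeholder credentials, not the framework's documented schema.

import pywren_ibm_cloud as cbutton

# Illustrative configuration sketch; all values are placeholders.
config = {
    'ibm_cf': {                                   # assumed section name
        'endpoint': '<YOUR_CLOUD_FUNCTIONS_ENDPOINT>',
        'namespace': '<YOUR_NAMESPACE>',
        'api_key': '<YOUR_CF_API_KEY>',
    },
    'ibm_cos': {
        'endpoint': '<YOUR_COS_ENDPOINT>',        # as in config['ibm_cos']['endpoint']
        'api_key': '<YOUR_COS_API_KEY>',          # as in config['ibm_cos']['api_key']
    },
}

cb = cbutton.ibm_cf_executor(config=config)       # config= is also used on slide 32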
21. What PyWren-IBM is good for
• Batch processing, UDFs, ETL, HPC and Monte Carlo simulations
• Embarrassingly parallel workloads or problems, i.e. cases where there is little or no dependency between the parallel tasks
• A subset of map-reduce flows (see the sketch below)
(diagram: input data split across tasks 1, 2, 3, …, n, whose outputs are collected into the results)
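For the map-reduce subset, PyWren-IBM also exposes a map_reduce call (used later, on slide 32). A minimal sketch, assuming the (map_function, data, reduce_function, chunk_size) argument order seen there and that the reduce function receives the list of map results:

import pywren_ibm_cloud as cbutton

def my_map_function(x):
    return x * x

def my_reduce_function(results):
    # Combine the list of map outputs into a single value.
    return sum(results)

cb = cbutton.ibm_cf_executor()
print(cb.map_reduce(my_map_function, [1, 2, 3, 4], my_reduce_function, None).get_result())  # 30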
22. What does PyWren-IBM require?
• A Function as a Service platform: IBM Cloud Functions, Apache OpenWhisk, OpenShift, Kubernetes, etc.
• Storage accessible from the Function as a Service platform through the S3 API: IBM Cloud Object Storage, Red Hat Ceph
23. PyWren-IBM and HPC
This Photo by Unknown Author is licensed under CC BY-SA
24. HPC on “super” computers
• Dedicated HPC supercomputers
• Designed to be super fast
• Calculations usually rely on the Message Passing Interface (MPI)
• Pros: HPC supercomputers
• Cons: HPC supercomputers
(diagram: HPC simulations running on dedicated HPC supercomputers)
25. HPC on VMs
• No need to buy expensive machines
• Frameworks exist to run HPC flows over VMs
• Flows usually depend on MPI and data locality
• Recent academic interest
• Pros: virtual machines
• Cons: virtual machines
(diagram: HPC simulations running on virtual machines, private, cloud, etc.)
26. HPC on Containers
• Good granularity, parallelism, resource allocation, etc.
• Research papers, frameworks
• Singularity / Docker containers
• Pros: containers
• Cons: moving an entire application into containers usually requires a re-design
(diagram: HPC simulations running on containers)
27. HPC on FaaS with PyWren-IBM
• FaaS is a perfect platform to scale code and applications
• Many FaaS platforms allow users to use Docker containers
• The code can contain any dependencies
• PyWren-IBM is a natural fit for many HPC flows
• Pros: the easy move to serverless
• Cons: not for all use cases
• Try it yourself…
(diagram: HPC simulations running on containers over FaaS)
28. Stock price prediction with PyWren-IBM
• A mathematical approach to stock price modelling, more accurate for modelling prices over longer periods of time
• We ran a Monte Carlo stock prediction over IBM Cloud Functions with PyWren-IBM
• With PyWren-IBM the total code is ~40 lines; without PyWren-IBM, running the same code requires hundreds of additional lines of code (see the sketch below)

Number of forecasts: 100,000
Local run (1 CPU, 4 cores): 10,000 seconds
IBM CF: ~70 seconds
Total number of CF invocations: 1,000

• We ran 1,000 concurrent invocations, each consuming 1024 MB of memory
• Each invocation predicted a forecast of 1,080 days and used 100 random samples per prediction. In total we performed 108,000,000 calculations
• About 2,500 forecasts predicted a stock price of around $130
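As an illustration of what a single invocation might do (a sketch under assumed parameters, not the authors' exact code), each forecast can be simulated as a simple random walk over 1,080 days averaged over 100 samples, and PyWren-IBM fans out 1,000 such invocations. The drift and volatility values below are hypothetical.

import random
import pywren_ibm_cloud as cbutton

DAYS, SAMPLES = 1080, 100
MU, SIGMA, S0 = 0.0002, 0.01, 100.0  # hypothetical daily drift, volatility, start price

def forecast(_):
    # One forecast: average final price over SAMPLES random walks of DAYS steps.
    finals = []
    for _ in range(SAMPLES):
        price = S0
        for _ in range(DAYS):
            price *= 1.0 + random.gauss(MU, SIGMA)
        finals.append(price)
    return sum(finals) / SAMPLES

cb = cbutton.ibm_cf_executor()
cb.map(forecast, list(range(1000)))  # 1,000 concurrent invocations
predicted_prices = cb.get_result()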
30. PyWren-IBM for data processing
Face recognition experiment with PyWren-IBM over IBM Cloud
• Align faces, using open source code, from 1,000 images stored in IBM Cloud Object Storage
• Given Python code that knows how to extract a face from a single image
• Run from any Python notebook
31. Processing images without PyWren-IBM
import logging
import os
import sys
import time
import shutil
import cv2
from openface.align_dlib import AlignDlib
logger = logging.getLogger(__name__)
temp_dir = '/tmp'
def preprocess_image(bucket, key, data_stream, storage_handler):
"""
Detect face, align and crop :param input_path. Write output to :param output_path
:param bucket: COS bucket
:param key: COS key (object name ) - may contain delimiters
:param storage_handler: can be used to read / write data from / into COS
"""
crop_dim = 180
#print("Process bucket {} key {}".format(bucket, key))
sys.stdout.write(".")
# key of the form /subdir1/../subdirN/file_name
key_components = key.split('/')
file_name = key_components[len(key_components)-1]
input_path = temp_dir + '/' + file_name
if not os.path.exists(temp_dir + '/' + 'output'):
os.makedirs(temp_dir + '/' +'output')
output_path = temp_dir + '/' +'output/' + file_name
with open(input_path, 'wb') as localfile:
shutil.copyfileobj(data_stream, localfile)
exists = os.path.isfile(temp_dir + '/' + 'shape_predictor_68_face_landmarks')
if not exists:
res = storage_handler.get_object(bucket, 'lfw/model/shape_predictor_68_face_landmarks.dat', stream=True)
with open(temp_dir + '/' + 'shape_predictor_68_face_landmarks', 'wb') as localfile:
shutil.copyfileobj(res, localfile)
align_dlib = AlignDlib(temp_dir + '/' + 'shape_predictor_68_face_landmarks')
image = _process_image(input_path, crop_dim, align_dlib)
if image is not None:
#print('Writing processed file: {}'.format(output_path))
cv2.imwrite(output_path, image)
f = open(output_path, "rb")
processed_image_path = os.path.join('output',key)
storage_handler.put_object(bucket, processed_image_path, f)
os.remove(output_path)
else:
pass;
#print("Skipping filename: {}".format(input_path))
os.remove(input_path)
def _process_image(filename, crop_dim, align_dlib):
image = None
aligned_image = None
image = _buffer_image(filename)
if image is not None:
aligned_image = _align_image(image, crop_dim, align_dlib)
else:
raise IOError('Error buffering image: {}'.format(filename))
return aligned_image
def _buffer_image(filename):
logger.debug('Reading image: {}'.format(filename))
image = cv2.imread(filename, )
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
return image
def _align_image(image, crop_dim, align_dlib):
bb = align_dlib.getLargestFaceBoundingBox(image)
aligned = align_dlib.align(crop_dim, image, bb, landmarkIndices=AlignDlib.INNER_EYES_AND_BOTTOM_LIP)
if aligned is not None:
aligned = cv2.cvtColor(aligned, cv2.COLOR_BGR2RGB)
return aligned
import ibm_boto3
import ibm_botocore
from ibm_botocore.client import Config
from ibm_botocore.credentials import DefaultTokenManager
t0 = time.time()
client_config = ibm_botocore.client.Config(signature_version='oauth', max_pool_connections=200)
api_key = config['ibm_cos']['api_key']
token_manager = DefaultTokenManager(api_key_id=api_key)
cos_client = ibm_boto3.client('s3', token_manager=token_manager, config=client_config, endpoint_url=config['ibm_cos']['endpoint'])
try:
paginator = cos_client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket="gilvdata", Prefix = 'lfw/test/images')
print (page_iterator)
except ibm_botocore.exceptions.ClientError as e:
print(e)
class StorageHandler:
def __init__(self, cos_client):
self.cos_client = cos_client
def get_object(self, bucket_name, key, stream=False, extra_get_args={}):
"""
Get object from COS with a key. Throws StorageNoSuchKeyError if the given key does not exist.
:param key: key of the object
:return: Data of the object
:rtype: str/bytes
"""
try:
r = self.cos_client.get_object(Bucket=bucket_name, Key=key, **extra_get_args)
if stream:
data = r['Body']
else:
data = r['Body'].read()
return data
except ibm_botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "NoSuchKey":
raise StorageNoSuchKeyError(key)
else:
raise e
def put_object(self, bucket_name, key, data):
"""
Put an object in COS. Override the object if the key already exists.
:param key: key of the object.
:param data: data of the object
:type data: str/bytes
:return: None
"""
try:
res = self.cos_client.put_object(Bucket=bucket_name, Key=key, Body=data)
status = 'OK' if res['ResponseMetadata']['HTTPStatusCode'] == 200 else 'Error'
try:
log_msg='PUT Object {} size {} {}'.format(key, len(data), status)
logger.debug(log_msg)
except:
log_msg='PUT Object {} {}'.format(key, status)
logger.debug(log_msg)
except ibm_botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "NoSuchKey":
raise StorageNoSuchKeyError(key)
else:
raise e
temp_dir = '/home/dsxuser/.tmp'
storage_client = StorageHandler(cos_client)
for page in page_iterator:
if 'Contents' in page:
for item in page['Contents']:
key = item['Key']
r = cos_client.get_object(Bucket='gilvdata', Key=key)
data = r['Body']
preprocess_image('gilvdata', key, data, storage_client)
Business logic vs. boilerplate:
• Loops over all the images
• Close to 100 lines of “boilerplate” code to find the images and to read and write the objects, etc.
• The data scientist needs to be familiar with the S3 API
• Execution time: approximately 36 minutes!
32. Processing images with PyWren-IBM
(the business logic is identical to slide 31: preprocess_image, _process_image, _buffer_image and _align_image are repeated verbatim)
import pywren_ibm_cloud as pywren

pw = pywren.ibm_cf_executor(config=config, runtime='pywren-dlib-runtime_3.5')
bucket_name = 'gilvdata/lfw/test/images'
results = pw.map_reduce(preprocess_image, bucket_name, None, None).get_result()
Business logic vs. boilerplate:
• Under 3 lines of “boilerplate”!
• The data scientist does not need to use the S3 API!
• Execution time is 35 seconds, as compared to 36 minutes!
33. Metabolomics with PyWren-IBM
Metabolomics application with PyWren-IBM
• With EMBL, the European Molecular Biology Laboratory
• Originally uses Apache Spark, deployed across VMs in the cloud
• We used PyWren-IBM to provide a prototype that deploys the metabolite annotation engine as serverless actions in the IBM Cloud
• https://github.com/metaspace2020/pywren-annotation-pipeline
Benefits of PyWren-IBM
• Better control of data partitions
• Speed of deployment, no need for VMs
• Elasticity and automatic scaling
• And many more…
34. Behind the scenes
• Molecular databases: up to 100M molecular strings
• Dataset input: up to a 50 GB binary file
(diagram: the metabolite annotation engine, molecular annotation and image processing, deployed by PyWren-IBM on IBM Cloud Functions and producing the results)
35. Annotation results
A whole-body section of a mouse model showing localization of glutamate. Glutamate is linked to cancer, where it supports proliferation and growth of cancer cells.
(figure: glutamate annotations shown in tumor and brain panels)
37. Summary
• Serverless is extremely promising for HPC and big data processing
• But… a “Cloud Button” is needed
• PyWren-IBM to the rescue: demonstrated benefits for HPC and batch data pre-processing
• For more use cases and examples visit our project page, all open source!
https://github.com/pywren/pywren-ibm-cloud
Thank you
[email protected]