Apache Stratos (incubating) Hangout IV - Stratos Controller and CLI Internals (Isuru Perera)
Slides used for Apache Stratos (incubating) Fourth Hangout. Hangout video can be found at http://youtu.be/VtF9DVGKbTQ
Website: http://stratos.incubator.apache.org
Mailing List:
Subscribe: [email protected]
Post (after subscription): [email protected]
Social Media:
Google+: https://plus.google.com/103515557134069849802
Twitter: https://twitter.com/ApacheStratos
Facebook: https://www.facebook.com/apache.stratos
LinkedIn: http://www.linkedin.com/groups?home=&gid=5131436
This document discusses integrating Complex Event Processing (CEP) into Apache Stratos 4.0.0. It introduces CEP and why it would benefit Stratos. Events from load balancers and cartridge agents regarding requests, faults, and health status would be sent to CEP. CEP would process these events, calculate averages and derivatives, and publish summarized events to a message broker. The document outlines CEP integration architecture and components like stream definitions, input/output adapters, event builders, and execution plans. Demonstrations of the CEP integration are provided.
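The averaging and derivative calculations described can be pictured with a small sketch: a sliding window over metric samples that emits an average and a rate of change. This is an illustration of the idea only; the class and field names are invented, not Stratos's actual stream definitions.

```python
# Sketch of the kind of summarization a CEP engine performs over health
# events: a sliding time window that emits the average and the first
# derivative (rate of change) of a metric.
from collections import deque

class SlidingWindow:
    def __init__(self, size):
        self.samples = deque(maxlen=size)  # (timestamp, value) pairs

    def add(self, timestamp, value):
        self.samples.append((timestamp, value))

    def average(self):
        return sum(v for _, v in self.samples) / len(self.samples)

    def gradient(self):
        # First derivative: change in value over change in time across the window.
        (t0, v0), (t1, v1) = self.samples[0], self.samples[-1]
        return (v1 - v0) / (t1 - t0) if t1 != t0 else 0.0

w = SlidingWindow(size=3)
w.add(0, 40.0)   # e.g. memory consumption samples from a cartridge agent
w.add(10, 50.0)
w.add(20, 60.0)
print(w.average())   # 50.0
print(w.gradient())  # 1.0 (units per second)
```

The summarized pair (average, gradient) is what would be published to the message broker for the autoscaler to consume.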
This document summarizes the 4.0 architecture of Apache Stratos, an open source PaaS incubation project. The architecture allows users to deploy composite applications across multiple clouds by defining cartridges, autoscaling policies, and deployment policies. It uses a Stratos Manager, Cloud Controller, Message Broker, Auto Scaler, Load Balancer, and Cartridge Agents that communicate through topics. This enables capabilities like dynamic scaling, load balancing, smart policies, and multi-cloud deployment.
The document discusses the configuration files used by the Cloud Controller (CC) such as cloud-controller.xml, cartridge.xml, and service.xml, explaining how they define common configurations, cartridges, and services respectively and how they are used to build CC's information model; it also addresses how these files are deployed and how hot updating and deployment of the files works.
This document summarizes a presentation on Apache Stratos Cloud Controller. It defines Cloud Controller as a bridge between application and infrastructure layers that enables scaling across IaaS providers and shares service topology information. It supports AWS, OpenStack, and vCloud by default. Adding a new IaaS provider involves extending the Iaas interface, packaging as an OSGi fragment, and defining in the cloud-controller configuration. The presentation outlines Cloud Controller's architecture and configuration files to be discussed in more detail next week.
The Role of Elastic Load Balancer - Apache Stratos (Imesh Gunaratne)
The document discusses the role of the Elastic Load Balancer (ELB) in Apache Stratos PaaS. It describes how the ELB uses components like Synapse, Axis2, and Tribes to distribute incoming traffic across backend nodes and auto-scale capacity. The ELB handles load balancing, failover, auto-scaling, and multi-tenancy. It integrates with Stratos by receiving topology information, load balancing requests to cartridge instances, and auto-scaling the number of instances based on traffic.
This document discusses the load balancer component architecture in Apache Stratos 4.0.0. It uses the Apache Synapse mediation framework and includes a load balancing extension. The load balancer component architecture is event-driven and supports load balancing algorithms, session management, multi-tenancy, statistics reporting, and service/subscription-aware load balancing. It also includes an extension API with a reference implementation for HAProxy to integrate third party load balancers.
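As an illustration of the simplest algorithm such a load balancer supports, here is a minimal round-robin member selection sketch; the member names are hypothetical, not the component's actual API:

```python
# Minimal round-robin selection: each request is dispatched to the next
# member in a repeating cycle.
import itertools

class RoundRobin:
    def __init__(self, members):
        self._cycle = itertools.cycle(members)

    def next_member(self):
        return next(self._cycle)

lb = RoundRobin(["instance-1", "instance-2", "instance-3"])
picks = [lb.next_member() for _ in range(5)]
print(picks)  # ['instance-1', 'instance-2', 'instance-3', 'instance-1', 'instance-2']
```

Session management and service-aware routing layer additional bookkeeping on top of a core selection step like this.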
Apache Stratos is an open source Platform as a Service (PaaS) framework that was originally developed by WSO2 and has been donated to the Apache Foundation. It deploys onto Infrastructure as a Service providers like AWS, OpenStack, and vCloud to create a secure, multi-tenant, elastic PaaS. Stratos uses components like the Cloud Controller, Elastic Load Balancer, Artifact Distribution Coordinator, and Management Console to manage deploying applications onto virtual machines and containers. Developers can create custom cartridges that plug into Stratos to deliver new services like PHP, ESB, or other platforms as a service offerings.
How to Autoscale in Apache Cloudstack using LiquiD AutoScaler (Bob Bennink)
The presentation shows how to use LiquiD AutoScaler for autoscaling in Apache CloudStack. It works with any load-balancer and does not require any coding skills.
Setting the infrastructure up takes minutes and no additional hardware or software is required.
Its benefits are better responsiveness during high traffic, highly available websites, lower costs, and lower energy consumption.
It provides IaaS providers with additional functionality to their CloudStack cloud orchestration platforms.
It can be used to monitor websites, web apps and other infrastructure and runs in public and private clouds.
This document outlines several Spring Cloud components: Hystrix for fault tolerance, Eureka for service discovery, Zuul for routing and filtering, Ribbon for load balancing, Feign for declarative REST clients, Spring Cloud Config for external configuration, and Spring Cloud Bus for distributed messaging between apps. It provides brief descriptions and configuration examples for each component.
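Of these components, Hystrix's circuit breaker is the least self-explanatory. A minimal language-neutral sketch of the pattern it implements (this is the idea only, not Hystrix's API):

```python
# Circuit-breaker pattern: after repeated failures the breaker "opens" and
# callers get a fallback immediately instead of hammering a failing service.
class CircuitBreaker:
    def __init__(self, failure_threshold):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "CLOSED"

    def call(self, func, fallback):
        if self.state == "OPEN":
            return fallback()          # short-circuit: skip the failing dependency
        try:
            result = func()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "OPEN"    # trip the breaker
            return fallback()

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise RuntimeError("service down")

for _ in range(3):
    print(breaker.call(flaky, fallback=lambda: "cached response"))
print(breaker.state)  # OPEN
```

Real implementations also reset to a half-open state after a timeout so the dependency gets periodically retried.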
This document discusses autoscaling without NetScaler. It describes using XenServer API Round Robin Databases (RRDs) to reproduce the load balancing and monitoring capabilities of NetScaler for autoscaling. The document explains that RRDs store performance metrics on a per-host and per-VM basis. It details how CloudStack can query the RRDs over HTTP to monitor service groups and trigger scaling based on predefined policies, similar to how NetScaler operates with autoscaling. It provides examples of downloading whole RRDs and updates from the RRDs to retrieve monitoring data for hosts and VMs.
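The trigger logic described reduces to comparing an averaged metric against policy thresholds. A toy sketch of that decision, with made-up thresholds and with literal samples standing in for values fetched from the RRDs over HTTP:

```python
# Threshold-based scaling decision over polled metric samples.
def scaling_decision(cpu_samples, scale_up_at=80.0, scale_down_at=20.0):
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg >= scale_up_at:
        return "scale-up"
    if avg <= scale_down_at:
        return "scale-down"
    return "steady"

print(scaling_decision([85.0, 92.0, 88.0]))  # scale-up
print(scaling_decision([10.0, 15.0, 12.0]))  # scale-down
print(scaling_decision([50.0, 55.0]))        # steady
```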
Fully fault tolerant real time data pipeline with Docker and Mesos (Rahul Kumar)
This document discusses building a fault-tolerant real-time data pipeline using Docker and Mesos. It describes how Mesos provides resource sharing and isolation across frameworks like Marathon and Spark Streaming. Spark Streaming ingests live data streams and processes them in micro-batches to provide fault tolerance. The document advocates using Mesos to run Spark Streaming jobs across clusters for high availability and recommends techniques like checkpointing and write-ahead logs to ensure no data loss during failures.
Best Practice for Deploying Application with Heat (Ethan Lynn)
Long Quan Sha and Ethan Lynn from IBM and Tian Hua Huang from Huawei presented on best practices for Heat resource modules and deployment patterns. They discussed Heat introduction, software deployment options using cloud-init and software deployments, building custom images, and signal transport methods. They also covered creating resource modules based on business concepts to make templates easier to understand and compose common deployment patterns. Finally, they demonstrated resource modules and a load balancing autoscaling group template.
AWS Study Group - Chapter 10 - Matching Supply and Demand [Solution Architect... (QCloudMentor)
This chapter discusses how to match computing resource supply and demand on AWS. It covers Elastic Load Balancing (ELB) and its three types - classic, application, and network load balancers. It also discusses AWS Auto Scaling, which allows automatically scaling computing resources up or down based on demand. Key attributes of ELB like stateless/stateful, internet-facing/internal-facing, and cross-zone load balancing are explained.
This document discusses auto scaling with Apache CloudStack and Citrix NetScaler. It provides an overview of auto scaling and its benefits like high availability, cost savings and energy savings. It describes example use cases and life cycle of auto scaling including initial configuration, scaling up as traffic increases and scaling down as traffic decreases. Key steps involved are creating VM templates, adding auto scale policies in CloudStack, and configuring Citrix NetScaler.
The document discusses and compares container orchestration platforms Docker Swarm Mode and Kubernetes. It provides an overview of the core concepts and features of each, including services, networks, and application deployment. Both platforms make it easy to install a basic cluster but Kubernetes is noted as having more advanced built-in features while Docker Swarm Mode has a simpler learning curve. There is no clear winner predicted as the platforms continue to improve and develop rapidly.
This document discusses building a fault-tolerant Kafka cluster on AWS to handle 2.5 billion requests per day. It covers choosing AWS instance types and broker counts, spreading brokers across availability zones, configuring replication and partitioning, automating fault tolerance, adding metrics and alerts, and testing the cluster's resilience. Key decisions include broker placement, topic partitioning, Zookeeper ensemble sizing, and automation to dynamically reassign partitions and change configurations in response to failures or added capacity.
Akka at Enterprise Scale: Performance Tuning Distributed Applications (Lightbend)
Organizations like Starbucks, HPE, and PayPal have selected the Akka toolkit for their enterprise-scale distributed applications; and when it comes to squeezing out the best possible performance, the secret is using two particular modules in tandem: Akka Cluster and Akka Streams.
In this webinar by Nolan Grace, Senior Solution Architect at Lightbend, we look at these two Akka modules and discuss the features that will push your application architecture to the next tier of performance.
For the full blog post, including the video, visit: https://www.lightbend.com/blog/akka-at-enterprise-scale-performance-tuning-distributed-applications
Tales from the four-comma club: Managing Kafka as a service at Salesforce | L... (HostedbyConfluent)
Apache Kafka is a key part of the Big Data infrastructure at Salesforce, enabling publish/subscribe and data transport in near real-time at enterprise scale handling trillions of messages per day. In this session, hear from the teams at Salesforce that manage Kafka as a service, running over a hundred clusters across on-premise and public cloud environments with over 99.9% availability. Hear about best practices and innovations, including:
* How to manage multi-tenant clusters in a hybrid environment
* High volume data pipelines with Mirus replicating data to Kafka and blob storage
* Kafka Fault Injection Framework built on Trogdor and Kibosh
* Automated recovery without data loss
* Using Envoy as an SNI-routing Kafka gateway
We hope the audience will have practical takeaways for building, deploying, operating, and managing Kafka at scale in the enterprise.
KSQL is an open source streaming SQL engine for Apache Kafka. Come hear how KSQL makes it easy to get started with a wide-range of stream processing applications such as real-time ETL, sessionization, monitoring and alerting, or fraud detection. We'll cover both how to get started with KSQL and some under-the-hood details of how it all works.
Using Kafka as a Database For Real-Time Transaction Processing | Chad Preisle... (HostedbyConfluent)
You have learned about Kafka event sourcing with streams and using Kafka as a database, but you may be having a tough time wrapping your head around what that means and what challenges you will face. Kafka’s exactly once semantics, data retention rules, and stream DSL make it a great database for real-time transaction processing. This talk will focus on how to use Kafka events as a database. We will talk about using KTables vs GlobalKTables, and how to apply them to patterns we use with traditional databases. We will go over a real-world example of joining events against existing data and some issues to be aware of. We will finish covering some important things to remember about state stores, partitions, and streams to help you avoid problems when your data sets become large.
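The core of "Kafka as a database" is materializing a table as the latest value per key from an ordered event log, which is what a KTable provides. A minimal in-memory sketch of that semantics (the idea only, not the Streams API):

```python
# Materialize a table from an event log: replay in order, keep the latest
# value per key, and treat a null value as a delete (tombstone).
def materialize(events):
    table = {}
    for key, value in events:
        if value is None:
            table.pop(key, None)        # tombstone removes the key
        else:
            table[key] = value          # later events overwrite earlier ones
    return table

log = [("acct-1", 100), ("acct-2", 50), ("acct-1", 75), ("acct-2", None)]
print(materialize(log))  # {'acct-1': 75}
```

The KTable/GlobalKTable distinction the talk covers is about where this materialized state lives: partitioned per instance versus fully replicated to every instance.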
Heat is an OpenStack template-based orchestration service that allows users to describe infrastructure and applications in text files called Heat Orchestration Templates (HOT) and automate the deployment of multi-component, multi-tier applications across OpenStack and other platforms. Heat provides the ability to define infrastructure resources like servers, networks, routers, and security groups and specify relationships between resources. It comprises several Python applications that work together to provision and manage OpenStack resources through a REST API according to the templates.
Real time data pipeline with Spark Streaming and Cassandra with Mesos (Rahul Kumar)
This document discusses building real-time data pipelines with Apache Spark Streaming and Cassandra using Mesos. It provides an overview of data management challenges, introduces Cassandra and Spark concepts. It then describes how to use the Spark Cassandra Connector to expose Cassandra tables as Spark RDDs and write back to Cassandra. It recommends designing scalable pipelines by identifying bottlenecks, using efficient data parsing, proper data modeling, and compression.
This document discusses CloudStack's extensibility through plug-ins and adaptors that allow third-party integration. It describes how plug-ins can define new APIs, network elements, services, and management components. Plug-ins have well-defined interfaces and configurations to integrate new functionality without modifying CloudStack code.
Lessons Learned From PayPal: Implementing Back-Pressure With Akka Streams And... (Lightbend)
Akka Streams and its amazing handling of streaming with back-pressure should be no surprise to anyone. But it takes a couple of use cases to really see it in action - especially in use cases where the amount of work continues to increase as you’re processing it. This is where back-pressure really shines.
In this talk for Architects and Dev Managers by Akara Sucharitakul, Principal MTS for Global Platform Frameworks at PayPal, Inc., we look at how back-pressure based on Akka Streams and Kafka is being used at PayPal to handle very bursty workloads.
In addition, Akara will also share experiences in creating a platform based on Akka and Akka Streams that currently processes over 1 billion transactions per day (on just 8 VMs), with the aim of helping teams adopt these technologies. In this webinar, you will:
* Start with a sample web crawler use case to examine what happens when each processing pass expands to a larger and larger workload to process.
* Review how we use the buffering capabilities in Kafka and the back-pressure with asynchronous processing in Akka Streams to handle such bursts.
* Look at lessons learned, plus some constructive “rants” about the architectural components, the maturity or immaturity you’ll expect, and tidbits and open source goodies like memory-mapped stream buffers that can be helpful in other Akka Streams and/or Kafka use cases.
Microservices with Netflix OSS and Spring Cloud (acogoluegnes)
Netflix OSS and Spring Cloud provide frameworks for building microservice applications that can run on various infrastructures. They include libraries like Eureka for service registration and discovery, Ribbon for load balancing, and Hystrix for fault tolerance via circuit breaking. Spring Cloud builds on Spring Boot and "Spring-ifies" Netflix libraries, providing an easy way to add configuration, service discovery, and other features needed for microservices. These frameworks allow building microservice applications that are decoupled from the underlying infrastructure and can run on traditional or cloud-based systems.
This document discusses autonomic decentralized elasticity management of cloud applications. It presents a reinforcement learning approach called ADEC where each instance independently monitors and manages its resources and applications using a set of simple states and actions. The instances coordinate using a distributed key-value store to optimize placement of applications across instances and elastically scale instances up and down to meet application objectives like response time thresholds. An evaluation on Amazon EC2 showed ADEC could dynamically provision instances and applications in response to changing workloads to satisfy application service level objectives with low overhead.
Autonomic Decentralised Elasticity Management of Cloud Applications (Srikumar Venugopal)
This document presents an autonomic decentralized elasticity management system called ADEC for cloud applications. ADEC uses reinforcement learning where each instance independently monitors itself and learns optimal management policies over time through a reward/punishment system. Instances coordinate using a distributed hash table to provision and dynamically place applications across instances to maximize utilization while meeting response time and availability requirements. The system was evaluated on Amazon EC2 using a hotel management application under varying workloads, demonstrating ADEC's ability to independently start and shutdown instances to meet application objectives.
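The reward/punishment idea can be sketched as a tiny value-learning loop; the states, actions, and rewards below are invented for illustration and are not ADEC's actual model:

```python
# Toy reinforcement-learning table: each instance keeps a learned value per
# (state, action) pair and picks the best-known action for its current state.
q = {}  # (state, action) -> learned value

def update(state, action, reward, alpha=0.5):
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward - old)  # move toward observed reward

def best_action(state, actions):
    return max(actions, key=lambda a: q.get((state, a), 0.0))

actions = ["scale_up", "scale_down", "no_op"]
# Simulated experience: when response time was high, scaling up paid off.
update("high_latency", "scale_up", reward=1.0)
update("high_latency", "scale_down", reward=-1.0)
print(best_action("high_latency", actions))  # scale_up
```

In the decentralized setting the paper describes, each instance runs a loop like this locally and shares state through the distributed hash table.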
- Solr 7.0 introduces new autoscaling capabilities including autoscaling policies and preferences to define the desired state of the cluster, and APIs to manage autoscaling.
- Triggers are added in Solr 7.1 to activate autoscaling when nodes join or leave the cluster to rebalance replicas according to policies.
- Collection APIs now use autoscaling policies and preferences to determine optimal replica placement. Future work will add more triggers and actions for autoscaling.
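What a rebalancing trigger action aims for can be sketched as spreading replicas evenly across live nodes; this toy round-robin placement shows the goal in miniature and is not Solr's actual placement logic:

```python
# Spread replicas across nodes round-robin so no node carries more than
# its share after a node joins or leaves.
def rebalance(replicas, nodes):
    assignment = {n: [] for n in nodes}
    for i, replica in enumerate(replicas):
        assignment[nodes[i % len(nodes)]].append(replica)
    return assignment

replicas = ["shard1_r1", "shard1_r2", "shard2_r1", "shard2_r2"]
print(rebalance(replicas, ["node1", "node2"]))
# {'node1': ['shard1_r1', 'shard2_r1'], 'node2': ['shard1_r2', 'shard2_r2']}
```

Solr's real placement additionally honors the autoscaling policies and preferences (e.g. rack or disk constraints) when choosing targets.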
Tech Talk on Autoscaling in Apache Stratos (Vishanth Bala)
This document provides an overview of autoscaling in Apache Stratos. It discusses the autoscaling architecture, which uses a CEP engine to analyze metrics like requests, memory usage, and CPU usage to predict future loads and scale instances accordingly. The autoscaling workflow and lifecycle are also described. Autoscale policies define thresholds that trigger scaling, and rules engines use rules to determine the appropriate scaling action. Dependent and group scaling are also covered, where scaling is triggered based on dependencies between cartridges or to scale an entire group. The presentation concludes with a demo and Q&A.
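Stratos's load prediction is often described as applying a motion-equation-style formula to the CEP-summarized average, gradient, and second derivative; the exact formula and parameter names below are an assumption, shown only to make the idea concrete:

```python
# Assumed prediction sketch: treat the metric's gradient as "velocity" and
# its second derivative as "acceleration", then extrapolate t seconds ahead
# (s = u*t + 0.5*a*t^2, added to the current average).
def predict_load(average, gradient, second_derivative, t):
    return average + gradient * t + 0.5 * second_derivative * t ** 2

# Current average of 100 in-flight requests, rising 2/sec with
# acceleration 0.5/sec^2, predicted 10 seconds ahead:
print(predict_load(100.0, 2.0, 0.5, 10.0))  # 145.0
```

The predicted value is then compared against the autoscale policy thresholds to decide whether to spin instances up or down.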
Smart monitoring: how does Oracle RAC manage resource, state - UKOUG19 (Anil Nair)
An important requirement for HA, and for providing scalability, is to detect problems and resolve them quickly before user sessions are affected. Oracle RAC, together with its family of solutions, works cohesively to quickly detect conditions such as unresponsive instances and network issues, and resolves them by redirecting work to other instances or to redundant network paths.
Performance tuning Grails Applications GR8Conf US 2014Lari Hotari
The document discusses performance tuning for Grails applications. It covers optimizing for latency, throughput, and quality of operations. Key aspects discussed include Amdahl's law, Little's law, profiling tools, common pitfalls, and recommendations for improving performance like eliminating blocking and focusing on feedback cycles. Specific techniques mentioned include optimizing SQL queries, reducing regular expressions, improving caching, and using thread dumps to diagnose production issues.
Deployment Checkup: How to Regularly Tune Your Cloud Environment - RightScale...RightScale
The document discusses the importance of regularly tuning cloud environments through deployment checkups. It highlights key areas to focus on during checkups, including cost optimization by identifying unused resources, ensuring optimal server utilization, implementing high availability and disaster recovery strategies, addressing security issues, and following best practices. Regular checkups help avoid inefficiencies that can arise over time and ensure deployments are optimized for cost, performance, availability and security.
The document discusses performance tuning for Grails applications. It outlines that performance aspects include latency, throughput, and quality of operations. Performance tuning optimizes costs and ensures systems meet requirements under high load. Amdahl's law states that parallelization cannot speed up non-parallelizable tasks. The document recommends measuring and profiling, making single changes in iterations, and setting up feedback cycles for development and production environments. Common pitfalls in profiling Grails applications are also discussed.
Grails has great performance characteristics but as with all full stack frameworks, attention must be paid to optimize performance. In this talk Lari will discuss common missteps that can easily be avoided and share tips and tricks which help profile and tune Grails applications.
Analysis of Database Issues using AHF and Machine Learning v2 - AOUG2022Sandesh Rao
VP AIOps for the Autonomous Database discusses AHF (Autonomous Health Framework) and how it is used for automatic issue detection, diagnostic collection and analysis of database issues using machine learning. AHF includes components like EXAchk for compliance checking, TFA for issue notification and support, and the Cluster Health Monitor and Cluster Health Advisor for monitoring cluster and database health using techniques like anomaly detection, diagnostics, and prognosis. It also discusses how AHF is calibrated and used to check for health issues and potential failures in an autonomous database deployment.
More and more clients are looking to understand the capabilities of the OTM/G-Log architecture and configuration in order better tune OTM. Usually, this is required because of poor OTM performance or as preparation for significant changes to OTM configuration, volume, or platform. The client may be experience poor performance throughout the entire system or for a very specific use cases. The primary objective of a Performance Tuning Exercise is to understand how OTM is being utilized and to recommend solution to improve the performance of OTM.
We recommend and will take the audience through a “ground-up” performance tuning exercise, starting with hardware and infrastructure, moving to Java and App server tuning, then to OTM technical tuning and finally to the OTM functional tuning (data, agents, etc).
These audits may identify hardware constraints at each tier, networking, or other infrastructure constraints causing sub-optimal system performance. Simply stated, the performance audit will identify all bottlenecks in the system if they exist.
In many cases the largest performance is impacts are not hardware, but rather how the data is configured within the application. So as part of the exercise we will analyze database performance, individual SQL queries, OTM Queues, bulk planning parameters, agents, rates and the settlement process.
Understanding the methods which will best identify these bottlenecks will help you avoid performance issues early in your project and save considerable time and expense as you near go-live. This presentation will guide you through the steps necessary to better understand what is impacting performance and how to best handle it. It will provide lessons learned and tools that are available to you better manage and maintain a healthy OTM environment.
Presented by Chris Plough at MavenWire
Auto-Train a Time-Series Forecast Model With AML + ADBDatabricks
Supply Chain, Healthcare, Insurance, and Finance often require highly accurate forecasting models in an enterprise large-scale fashion. With Azure Machine Learning on Azure Databricks, the scale and speed to large-scale many-models can be achieved and time-to-product decreases drastically. The better-together story poses an enterprise approach to AI/ML.
Azure AutoML offers an elegant solution efficiently to build forecasting models on Azure Databricks compute solving sophisticated business problems. The presentation covers the Azure Machine Learning + Azure Databricks approach (see slides attached) while the demo covers a hands-on business problem building a forecasting model in Azure Databricks using Azure Machine Learning. The AI/ML better-together story is elevated as MLFlow for Data Science Lifecycle Management and Hyperopt for distributed model execution completes AI/ML enterprise readiness for industry problems.
1. The document discusses research activities related to reducing energy consumption by at least 30% through the development of core source technologies for universal operating systems.
2. It describes four papers being presented, including ones on system and device latency modeling, power management frameworks for embedded systems, and automatic selection of power policies for operating systems.
3. It also summarizes four research topics from the National University, including performance evaluation of parallel applications using a power-aware paging method on next-generation memory architectures.
You know PowerShell and you must have heard of DSC, but 6 years after its creation, where are we at?
Join Gael Colas, a well-known DSC contributor and Microsoft MVP, in this session, he will show what's happening in the DSC community, how to get started, where to find information or help, and some best practices to follow.
He will demo some concepts, practices and use cases, share some code, and insights about who's behind DSC and what they are doing, so you have no excuse for not learning Configuration Management!
- Demo code: https://github.com/gaelcolas/packer-templates
Follow & connect with Gael Colas:
- Twitter: https://twitter.com/gaelcolas
- LinkedIn: https://www.linkedin.com/in/gaelcolas/
- Blog: https://gaelcolas.com/
Thanks to dotdigital Group (https://dotdigital.com / https://twitter.com/dotdigital) for providing the venue, food and drinks. We very much appreciate your continued support of our community of PowerShell & DevOps tech enthusiasts.
Join our next event at https://www.meetup.com/PowerShell-London-UK/. We are running at least one Meetup every month.
#PowerShell #PSDSC
High availability of data across geographic regions for search and analytical applications is a challenging task. Mission critical applications need effective failover strategies across data centers. Apache Solr offers Cross Data Center Replication (CDCR) as a feature from 6.0 and has added more features in subsequent releases.
The first part of session will center on an active-passive design model with one data-center as the primary and other data-centers as secondary clusters. The second design model centers on designing an active-active bidirectional setup such that both querying and indexing traffic can gracefully be redirected to the failover cluster.
The third part of session will center on an actual use case: An analytics application with high availability. We will discuss the improvements observed in terms of maintenance, performance, and throughput.
The session concludes with challenges and/or limitations in the current design and what improvements are forthcoming for Cross Data Center Replication in Apache Solr.
The document discusses a presentation about Apache Stratos and autoscaling. It includes an agenda that covers a brief introduction to Apache Stratos, database as a service challenges, autoscaling concepts, autoscaling in Apache Stratos using policies, and a demonstration of MongoDB running on Apache Stratos with autoscaling. The presenters are then introduced.
This document discusses Solr autoscaling, including:
1. Defining autoscaling policies and preferences to specify how a Solr cluster should be balanced and scaled.
2. How autoscaling helps perform operations like adding/removing nodes to maintain the desired cluster state.
3. Various autoscaling APIs and suggestions that can be used to automatically scale collections based on the defined policies and preferences.
Parallel processing involves executing multiple tasks simultaneously using multiple cores or processors. It can provide performance benefits over serial processing by reducing execution time. When developing parallel applications, developers must identify independent tasks that can be executed concurrently and avoid issues like race conditions and deadlocks. Effective parallelization requires analyzing serial code to find optimization opportunities, designing and implementing concurrent tasks, and testing and tuning to maximize performance gains.
Andrew Marnell: Transforming Business Strategy Through Data-Driven InsightsAndrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Generative Artificial Intelligence (GenAI) in BusinessDr. Tathagat Varma
My talk for the Indian School of Business (ISB) Emerging Leaders Program Cohort 9. In this talk, I discussed key issues around adoption of GenAI in business - benefits, opportunities and limitations. I also discussed how my research on Theory of Cognitive Chasms helps address some of these issues
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025BookNet Canada
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxJustin Reock
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
Complete Guide to Advanced Logistics Management Software in Riyadh.pdfSoftware Company
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdfAbi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, presentation slides, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
Big Data Analytics Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
2. Agenda
• Introduction to Autoscaling
• Apache Stratos Autoscaler Architecture
  • Component Architecture
  • Event Flow
• Autoscale Policy
  • Introduction to Autoscaler Policy
  • Autoscaling Strategies
• Deployment Policy
  • Introduction to Deployment Policy
  • Capacity Planning with Deployment Policy
  • Partition Selection Algorithms
• Rules Engine
  • Reasons for a Rule Engine
  • Rules for Apache Stratos Autoscaler
3. Introduction to Autoscaling
• What is scalability
  • Horizontal and vertical scaling
• What is high availability
• Procedure
  • Clustering
  • Load balancing
• Autoscaling
  • Automating the capacity planning
4. Introduction to Autoscaling Contd.
• Flexible cloud solution
  • User-defined policies, health status checks, and schedules
  • Driven by use case, cost, performance, and infrastructure
• SLA (Service Level Agreement) aware elastic cloud
  • QoS- and SLA-aware services
  • Decision factors for consumers
  • Solves performance, availability, and economic cost issues
• Capacity planning
  • Automated control of the cloud: cost vs. QoS; find the appropriate cloud model
• Cost factor
  • Reduce economic cost and energy footprint
• Procedure
  • Online observation and monitoring of the cloud
  • Trigger an event if an SLA violation happens
  • Use control theory and mathematical operations
  • Handle seasonal patterns, e.g. year-end/weekend patterns
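The monitoring procedure above boils down to a threshold check on an observed SLA metric. A minimal sketch of that idea, assuming a 95th-percentile response-time SLA (the function names and the event shape are illustrative assumptions, not Stratos code):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    # nearest-rank: ceil(p/100 * n), as a 1-based rank
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

def check_sla(response_times_ms, sla_limit_ms, p=95):
    """Return a violation event if the p-th percentile latency
    exceeds the SLA limit, otherwise None (no event triggered)."""
    observed = percentile(response_times_ms, p)
    if observed > sla_limit_ms:
        return {"event": "sla_violation",
                "observed_ms": observed,
                "limit_ms": sla_limit_ms}
    return None
```

In a real deployment the samples would arrive as a stream and the check would run inside the event-processing layer rather than over an in-memory list.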
7. Health Statistics as Events
• CEP receives events:
  • Requests in flight from the load balancer
  • Cartridge instance health statistics from the cartridge agent
    • CPU consumption
    • Memory consumption
• CEP summarizes the average, gradient, and second-derivative events of:
  • Requests in flight
  • CPU consumption
  • Memory consumption
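The average, gradient, and second-derivative summaries can be approximated over a sliding window with finite differences. A minimal sketch of the idea, not the actual CEP execution plan (window layout and sampling interval are assumptions):

```python
def summarize(window, interval_s=1.0):
    """Summarize a window of health readings (e.g. CPU %) sampled
    every `interval_s` seconds: mean, first and second derivative."""
    n = len(window)
    average = sum(window) / n
    # gradient: change per second between first and last sample
    gradient = (window[-1] - window[0]) / ((n - 1) * interval_s) if n > 1 else 0.0
    if n > 2:
        # second derivative: how the gradient changes between halves
        mid = n // 2
        g1 = (window[mid] - window[0]) / (mid * interval_s)
        g2 = (window[-1] - window[mid]) / ((n - 1 - mid) * interval_s)
        second_derivative = (g2 - g1) / (((n - 1) / 2) * interval_s)
    else:
        second_derivative = 0.0
    return {"average": average, "gradient": gradient,
            "second_derivative": second_derivative}
```

A linear ramp yields a constant gradient and a zero second derivative; an accelerating load yields a positive second derivative, which is what lets the autoscaler react before the average itself crosses a threshold.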
8. Autoscale Policy
• Deployable XML model
• Keeps load thresholds for threshold-based rule evaluation
• Deployed by DevOps or a similar role at start-up or later
• Hot deployable
• Users select an autoscale policy of their preference at subscription time
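To make "deployable XML model keeping load thresholds" concrete, here is an illustrative fragment of what such a policy file might contain. The element names below are assumptions for illustration only; consult the Stratos sample policies for the actual schema:

```xml
<!-- Illustrative only: element and attribute names are assumptions -->
<autoscalePolicy id="economy-policy">
  <loadThresholds>
    <requestsInFlight threshold="30"/>
    <memoryConsumption threshold="80"/>
    <loadAverage threshold="70"/>
  </loadThresholds>
</autoscalePolicy>
```

Because the policy is hot deployable, dropping an updated file into the deployment directory changes the thresholds the rules evaluate without restarting the autoscaler.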
10. Deployment Policy
• Deployable XML model
• Keeps the capacity planning
• Deployed by DevOps or a similar role at start-up or later
• Hot deployable
• Users select a deployment policy of their preference at subscription time
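The capacity planning in a deployment policy amounts to distributing new instances across partitions up to their configured maximums. A hypothetical sketch of a round-robin style partition selection (the data shapes are assumptions, not Stratos APIs):

```python
def select_partition(partitions, counts):
    """Pick the next partition for a new instance: choose the
    partition with the fewest members that still has room under
    its configured maximum. Returns None when all are full.

    partitions: list of dicts like {"id": "p1", "max": 3}
    counts: dict mapping partition id -> current member count
    """
    candidates = [p for p in partitions
                  if counts.get(p["id"], 0) < p["max"]]
    if not candidates:
        return None  # total capacity exhausted
    return min(candidates, key=lambda p: counts.get(p["id"], 0))["id"]
```

A "one-after-another" strategy would instead fill each partition to its maximum before moving to the next; both respect the same per-partition capacity limits.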
12. Rules Engine
• Why a rules engine
  • Ease of use: no byte code and easy to modify
  • Readable
  • Performance and scalability
• Uses the Drools engine as the default rules engine
• Rules
  • Minimum Rule
  • Scale Up Rule
  • Scale Down Rule
  • Terminate All Rule
13. Autoscaling Rules: Sample in Drools

rule "Minimum Rule"
dialect "mvel"
when
    $service : Service() 
    $cluster : Cluster() from $service.getClusters()
    $policy : AutoscalePolicy(id == $cluster.autoscalePolicyName) from $manager.getPolicyList()
    $partition : Partition() from $policy.getHAPolicy().getPartitions()
    $clusterContext : ClusterContext() from $context.getClusterContext($cluster.getClusterId())
    eval($clusterContext.getPartitionCount($partition.getId()) < $partition.getPartitionMembersMin())
then
    int memberCountToBeIncreased = 1;
    if($evaluator.delegateSpawn($partition, $cluster.getClusterId(), memberCountToBeIncreased)) {
        $clusterContext.increaseMemberCountInPartition($partition.getId(), memberCountToBeIncreased);
    }
end
14. Minimum Rule
• Runs when a “cluster created” event is received
• Scans through all the partitions of the cluster and finds the required minimums
• Calls CC to spawn the required minimum instances
• Also runs periodically (with a longer interval than the scale up/down rules) to ensure that the minimum count is preserved
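The Minimum Rule's logic, stripped of the Drools plumbing, is: compare each partition's live member count with its configured minimum and ask CC to spawn the difference. A sketch under that reading (`spawn` stands in for the Cloud Controller call, not the real interface):

```python
def enforce_minimums(partitions, counts, spawn):
    """For each partition, spawn enough instances to reach its minimum.
    `spawn(partition_id, n)` stands in for the Cloud Controller call
    and should return True on success.

    partitions: list of dicts like {"id": "p1", "min": 2}
    counts: dict mapping partition id -> current member count
    """
    spawned = {}
    for p in partitions:
        deficit = p["min"] - counts.get(p["id"], 0)
        if deficit > 0 and spawn(p["id"], deficit):
            spawned[p["id"]] = deficit
    return spawned
```

Running this both on the "cluster created" event and on a slow periodic timer is what keeps the minimum count preserved even if instances are lost later.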
15. Scale Up/Down Rule
• These rules run periodically
• Evaluate load details (received from CEP) against their thresholds (defined in the autoscale policy)
• Decide whether to scale up, scale down, or do nothing
• Call CC to spawn instances in the selected partitions
18. Average of CPU/Memory Consumption for a Specific Cluster
19. Terminate All Rule
• Runs when a “cluster removed” event is received
• Scans through all the partitions of the cluster and finds the member IDs to be terminated
• Calls CC to terminate those instances
20. Fault Handling Scenarios

Process | VM
--------+--------------------------------------------
Down    | Up
Down    | Down (it can be that the agent has crashed)
Up      | Up (but with a network issue)

Decision flow
• The cartridge agent publishes an event to CC
• CC updates the instance status in the topology
• The Autoscaler decides to kill the instance
• CEP identifies the fault and publishes an event to the Autoscaler
• The Autoscaler calls CC to terminate the instance (if available) and removes it from the topology
• The Autoscaler will spawn another instance to cover it
• CEP sends statistics on faulty requests to the Autoscaler
• The Autoscaler keeps monitoring and takes a decision to terminate the instance
• The Autoscaler will spawn another instance in the same partition to cover it
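The three scenarios in the table can be condensed into a decision function mapping observed process/VM states to a recovery action. This is one plausible reading of the flow above, not Stratos code; the action names and the fault-rate threshold are assumptions:

```python
def fault_action(process_up, vm_up, faulty_request_rate=0.0,
                 fault_threshold=0.5):
    """Map observed instance states to a recovery action:
    - process down, VM up: agent/CC report it; terminate and replace
    - process down, VM down (agent may have crashed): CEP notices the
      silence; remove from topology and replace
    - both up but faulty requests flowing (network issue): keep
      monitoring, replace in the same partition once the fault rate
      crosses the threshold
    """
    if not process_up and vm_up:
        return "terminate_and_replace"
    if not process_up and not vm_up:
        return "remove_from_topology_and_replace"
    if faulty_request_rate > fault_threshold:
        return "terminate_and_replace_in_partition"
    return "keep_monitoring"
```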