Watch this 55min training session to learn about the main command line tools you’ll be using when working with Tungsten Replicator.
TOPICS COVERED
- Re-cap the previous Installation
- Explore the main Command Line Tools
- tpm
- trepctl
- thl
Training Slides: 104 - Basics - Working With Command Line Tools (Continuent)
This 62min training session takes an in-depth look at the command line tools used in conjunction with Tungsten Clustering.
TOPICS COVERED
- Re-cap the previous Installation
- Explore the main Command Line Tools
- tpm
- cctrl
- trepctl
- thl
Troubleshooting Containerized TripleO Deployments (Sadique Puthen)
This document discusses troubleshooting containerized TripleO deployments. It provides an overview of traditional versus containerized TripleO deployments. Key aspects covered include building container images, registering images, deployment flow, and troubleshooting. It also discusses containerized components in the overcloud including HA pacemaker containers, standalone containers, containerized compute and Ceph nodes, and Neutron containers. Specific troubleshooting steps and files are outlined.
How to Troubleshoot OpenStack Without Losing Sleep (Sadique Puthen)
The complex architecture and design, and the difficulties encountered while troubleshooting, amplify the effort of debugging a problem in an OpenStack environment. This can give administrators and support associates sleepless nights if OpenStack's native and supporting components are not configured properly and tuned for optimum performance, especially with large deployments that involve high availability and load balancing.
- TCP uses congestion control and avoidance to prevent network congestion collapse. It operates in a distributed manner without centralized control.
- TCP's congestion control is based on additive increase, multiplicative decrease (AIMD) and uses a congestion window and packet pacing to smoothly increase and decrease transmission rates in response to packet loss as a signal of congestion.
- The key mechanisms are slow start for initial rapid ramp up, congestion avoidance for gradual increase, fast retransmit for quick recovery from single losses, and timeout for recovery from multiple losses or ack losses. These mechanisms work together to keep TCP stable and efficient under different network conditions.
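A toy simulation can make the interplay of these mechanisms concrete (a sketch, not the real TCP state machine; the window is counted in segments and loss events are injected by hand):

```python
# Toy TCP congestion control: slow start doubles cwnd each RTT until it
# reaches ssthresh, congestion avoidance then adds one segment per RTT,
# and a detected loss halves ssthresh and resumes from there.
def simulate_cwnd(rtts, loss_at, ssthresh=64):
    cwnd = 1
    history = []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt in loss_at:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = ssthresh                # fast-recovery style restart
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential ramp-up
        else:
            cwnd += 1                      # congestion avoidance: +1 per RTT
    return history

history = simulate_cwnd(10, loss_at={6})   # one loss in the 7th round trip
```

The window climbs exponentially, drops at the loss, then grows linearly again: the familiar sawtooth.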
The document discusses the TCP/IP protocol suite and transport layer services. Some key points:
- TCP/IP was originally developed by DARPA and later included in UNIX. It maps to the OSI layers and supports various physical/data link protocols.
- The transport layer provides logical communication between application processes on different hosts. TCP and UDP are the main transport protocols.
- TCP provides reliable, in-order byte streams using connection establishment and acknowledgments. UDP is a simpler connectionless protocol.
- Port numbers and IP addresses are used to multiplex/demultiplex segments between sockets at hosts for processes to communicate.
- TCP uses a three-way handshake to establish reliable connections between hosts.
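The multiplexing described above can be pictured as a table lookup keyed on addresses and ports (a simplified sketch; the names are illustrative):

```python
# Sketch of transport-layer demultiplexing: a TCP segment is delivered
# to the socket matching its (src ip, src port, dst ip, dst port)
# 4-tuple, which is how many connections can share one server port.
class DemuxTable:
    def __init__(self):
        self.tcp_sockets = {}          # 4-tuple -> connection handler

    def bind(self, src, sport, dst, dport, handler):
        self.tcp_sockets[(src, sport, dst, dport)] = handler

    def deliver(self, src, sport, dst, dport):
        # a real stack answers unknown tuples with a RST
        return self.tcp_sockets.get((src, sport, dst, dport), "RST")

table = DemuxTable()
table.bind("10.0.0.1", 40000, "10.0.0.2", 80, "http-conn-1")
table.bind("10.0.0.3", 40000, "10.0.0.2", 80, "http-conn-2")
```

Two clients may even use the same source port: the 4-tuples still differ, so the segments reach different sockets.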
BPF, the Berkeley Packet Filter mechanism, was first introduced into Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15-3.19, it received a major overhaul that drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, like tracing and networking, and about future plans.
Using open source tools for network device dataplane testing.
Our experiences from redGuardian DDoS mitigation scrubber testing.
Presented at PLNOG 20 (2018).
O'Reilly Velocity New York 2016 presentation on modern Linux tracing tools and technology. Highlights the available tracing data sources on Linux (ftrace, perf_events, BPF) and demonstrates some tools that can be used to obtain traces, including DebugFS, the perf front-end, and most importantly, the BCC/BPF tool collection.
Performance Lessons Learned in vRouter - Stephen Hemminger (harryvanhaaren)
This document summarizes lessons learned from optimizing DPDK performance in a virtual router. It discusses assigning cores to dataplane and control plane tasks, using small sleep intervals based on activity to limit CPU usage, and updating link state and statistics periodically. It recommends techniques like avoiding system calls, mutexes, and spinlocks for performance. Profiling showed the top functions were for packet input, output, and forwarding. Optimizing the longest prefix match for routing was also discussed.
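The activity-based sleep technique can be sketched in a few lines (the intervals are illustrative; the talk's actual dataplane is DPDK C code, not Python):

```python
# Adaptive polling: back off the sleep interval exponentially while the
# queue is idle to save CPU, and drop back to busy-polling the moment
# packets arrive, keeping system calls off the hot path.
MIN_SLEEP_US, MAX_SLEEP_US = 0, 1000

def next_sleep(current_us, packets_seen):
    if packets_seen:
        return MIN_SLEEP_US                 # busy: poll as tightly as possible
    doubled = current_us * 2 if current_us else 1
    return min(doubled, MAX_SLEEP_US)       # idle: back off, capped
```

An idle core thus converges on ~1 ms wake-ups, while a loaded one spins without ever sleeping.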
Anatomy of Neutron from the Eagle Eyes of Troubleshooters (Sadique Puthen)
This document summarizes the anatomy of OpenStack Neutron through examples of real-life troubleshooting scenarios. It explores four examples: security group rules not being effective, instances not getting IP addresses from DHCP, floating IP connections randomly failing, and slow provider network communications. For each example, it explains the root cause found by understanding Neutron's architecture and packet flows, and describes the troubleshooting steps taken such as examining logs, monitoring processes, and using tools like tcpdump. The goal is to demonstrate Neutron anatomy and troubleshooting methods rather than just state the problems and solutions.
gRPC is a modern open source RPC framework that enables client and server applications to communicate transparently. It is based on HTTP/2 for its transport mechanism and Protocol Buffers as its interface definition language. Some benefits of gRPC include being fast due to its use of HTTP/2, supporting multiple programming languages, and enabling server push capabilities. However, it also has some downsides such as potential issues with load balancing of persistent connections and requiring external services for service discovery.
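Protocol Buffers, gRPC's interface definition language, encodes integers on the wire as varints; here is a minimal sketch of that primitive (the encoding building block only, not the full gRPC framing):

```python
# Protobuf-style varint: 7 payload bits per byte, least-significant
# group first, with the high bit set on every byte except the last.
def encode_varint(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data):
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7
    raise ValueError("truncated varint")
```

For example, encode_varint(300) yields b"\xac\x02", the two-byte example used in the protobuf encoding documentation.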
TCP provides reliable data transmission through mechanisms like the three-way handshake, congestion control using AIMD, and fast retransmit. However, it is vulnerable to attacks like RST injection to terminate connections or FIN scans to detect open ports. Defenses include randomizing sequence numbers, stateful firewalls to validate packets, and intrusion detection systems to detect scanning behaviors.
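One defense alluded to above, validating a RST's sequence number against the receive window, can be sketched as follows (simplified; RFC 5961 describes the real rules, and sequence-number wraparound is ignored here):

```python
# Simplified RST validation in the spirit of RFC 5961: honor a RST only
# if its sequence number is exactly the next expected one; answer an
# in-window but inexact RST with a challenge ACK; drop everything else.
def classify_rst(seq, rcv_nxt, rcv_wnd):
    if seq == rcv_nxt:
        return "reset"            # legitimate reset
    if rcv_nxt < seq < rcv_nxt + rcv_wnd:
        return "challenge-ack"    # possibly injected: make the peer prove it
    return "drop"                 # blind injection lands outside the window
```

A blind attacker who cannot observe the connection must guess a sequence number inside a narrow range, which randomized initial sequence numbers make much harder.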
Improving the ZFS Userland-Kernel API with Channel Programs - BSDCAN 2017 (Matthew Ahrens)
The document discusses improving the ZFS userland-kernel API by introducing "channel programs". Channel programs allow complex ZFS operations to be described programmatically and executed atomically in the kernel syncing context. This improves performance, atomicity, and reduces API complexity. Specific examples discussed include cloning filesystems, recursively destroying datasets, and snapshotting with property listing. The technology is currently being used in the Delphix database virtualization product.
This document discusses several TCP congestion control algorithms: TCP Tahoe, Reno, New Reno, SACK, and Vegas. It provides details on how each algorithm handles slow start, congestion avoidance, fast retransmit, and congestion detection. TCP Vegas is highlighted as being superior to the other algorithms because it can detect and retransmit lost packets faster, has fewer retransmissions, more efficiently measures bandwidth availability, and experiences less congestion overall through proactive congestion detection and modified slow start and congestion avoidance.
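Vegas's proactive detection compares expected with actual throughput each round trip; the core rule can be sketched like this (the alpha/beta thresholds are the commonly cited defaults, assumed here):

```python
# TCP Vegas core idea: estimate the backlog queued in the network as
# (expected - actual throughput) * base_rtt, where expected = cwnd /
# base_rtt and actual = cwnd / current_rtt.  Too small a backlog means
# the pipe has room; too large means queues are building, so back off
# before any packet is actually lost.
ALPHA, BETA = 1, 3   # commonly cited defaults, in packets

def vegas_adjust(cwnd, base_rtt, rtt):
    expected = cwnd / base_rtt
    actual = cwnd / rtt
    diff = (expected - actual) * base_rtt   # estimated packets queued
    if diff < ALPHA:
        return cwnd + 1    # under-utilizing: increase linearly
    if diff > BETA:
        return cwnd - 1    # queues building: decrease linearly
    return cwnd            # within band: hold steady
```

Unlike Reno, which must lose a packet to sense congestion, this rule reacts to rising RTT, which is why Vegas sees fewer retransmissions.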
Multi-Tier App Network Topology with Neutron (Sadique Puthen)
This document discusses how Neutron builds network topology for multi-tier applications. It explains that Neutron uses network namespaces to isolate tenant resources and correlate application topology to Neutron components. It provides details on how Neutron creates networks, routers, load balancers, firewalls, and VPN connections to build the necessary infrastructure for a sample multi-tier application topology across two OpenStack sites.
An in-depth overview of the possibilities of SNMP, and of how to monitor your environment using it.
Learn what you can do with SNMP, and what SNMP can do for you, within one hour. Most aspects of SNMP are addressed: getting information and setting values, but also how the information is presented and the difference between OIDs and MIBs.
In this presentation I’m trying to make SNMP “simple” again, and understandable for everybody.
Geographically Dispersed Percona XtraDB Cluster Deployment (Marco Tusa)
Geographically Dispersed Percona XtraDB Cluster Deployment
Percona XtraDB Cluster is a very robust, high-performing and widely used solution to answer High Availability needs. But it can be very challenging when we need to deploy the cluster over a geographically dispersed area.
This presentation will briefly discuss the right approach to successfully deploy PXC when we need to cover multiple geographical sites, close and far.
- What PXC is, and what happens in a set of nodes on commit
- Let us clarify what "geographically dispersed" means
- What to keep in mind
- How to measure it correctly
- Use the right approach (sync/async)
- Use helpers like replication_manager
Computer networks have experienced an explosive growth over the past few years, and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): the ‘obvious’ ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of ‘wrong’ behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a ‘packet conservation’ principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion collapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.
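The ‘packet conservation’ principle at the heart of this work says that a connection in equilibrium puts a new packet into the network only when an old one leaves; a toy sender clocked by returning ACKs can illustrate it:

```python
# Self-clocking under packet conservation: after the window fills, each
# returning ACK admits exactly one new packet, so the sender is paced by
# the network itself rather than by a timer.
from collections import deque

def self_clocked_send(n_packets, window):
    in_flight = deque()
    events = []
    next_pkt = 0
    # the initial burst fills the window
    while next_pkt < n_packets and len(in_flight) < window:
        in_flight.append(next_pkt)
        events.append(f"send {next_pkt}")
        next_pkt += 1
    # afterwards, one ACK in means one packet out
    while in_flight:
        acked = in_flight.popleft()
        events.append(f"ack {acked}")
        if next_pkt < n_packets:
            in_flight.append(next_pkt)
            events.append(f"send {next_pkt}")
            next_pkt += 1
    return events
```

The number of packets in flight never exceeds the window, which is exactly the conservation property the paper's algorithms enforce.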
The document proposes a 3-way site architecture using the etcd consensus protocol for high-availability energy management systems. The key advantage is that it allows fully automated failover between sites without risk of dual masters or data inconsistencies. When the network splits, etcd elections ensure only one site remains enabled as leader while the others become standby. This prevents multiple enabled instances from overwriting each other's data. The system can also withstand intermittent networks without risk of prolonged data loss.
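The split-brain protection described here reduces to a majority rule across the three sites; a minimal sketch (the function name is illustrative):

```python
# With three sites, a site may hold (or keep) leadership only while it
# can reach a strict majority of members.  After a network split, at
# most one partition contains two of the three sites, so at most one
# leader survives and isolated sites demote themselves to standby.
def can_lead(reachable_members, cluster_size=3):
    # reachable_members counts the site itself
    return reachable_members > cluster_size // 2
```

With can_lead, a fully connected site (3 reachable) and a site in the two-member partition (2 reachable) may lead, while an isolated site (1 reachable) may not, which is what rules out dual masters.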
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
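The adaptive retransmission timeout mentioned at the end keeps running estimates of the RTT's mean and deviation; the classic Jacobson/Karels update, with its usual gains, looks like this:

```python
# Jacobson/Karels retransmission timeout: keep a smoothed RTT (srtt) and
# a smoothed mean deviation (rttvar), and set RTO = srtt + 4 * rttvar so
# the timer tracks both the level and the jitter of the path.
def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    err = sample - srtt
    srtt = srtt + alpha * err                     # smoothed mean
    rttvar = rttvar + beta * (abs(err) - rttvar)  # smoothed deviation
    return srtt, rttvar, srtt + 4 * rttvar        # new estimates and RTO
```

Using the variance term (rather than a fixed multiple of the mean) keeps the timeout tight on stable paths yet generous on jittery ones, avoiding spurious retransmissions.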
Promise of Push (HTTP/2 Web Performance) (Colin Bendell)
This document discusses HTTP/2 server push and how it can be used to improve web performance. It begins with an overview of existing techniques for pushing content like polling, long polling, pushlets, and server-sent events. It then provides details on how HTTP/2 server push works, including the new PUSH_PROMISE frame that allows the server to push associated resources to the client. It examines the benefits of HTTP/2 push like reduced latency and improved caching as well as challenges around flexibility and complexity compared to other push techniques.
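Every HTTP/2 frame, PUSH_PROMISE included (type 0x5), begins with the same 9-octet header defined in RFC 7540; parsing it is a one-liner with struct:

```python
import struct

# HTTP/2 frame header (RFC 7540, section 4.1): 24-bit payload length,
# 8-bit type, 8-bit flags, and a reserved bit plus 31-bit stream id.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x5: "PUSH_PROMISE"}

def parse_frame_header(data):
    hi, lo, ftype, flags, stream = struct.unpack(">BHBBI", data[:9])
    return {
        "length": (hi << 16) | lo,
        "type": FRAME_TYPES.get(ftype, hex(ftype)),
        "flags": flags,
        "stream": stream & 0x7FFFFFFF,   # mask off the reserved bit
    }
```

A PUSH_PROMISE frame's payload carries the promised request's header block, so the client learns what resource is coming before any pushed DATA frames arrive and can cancel the push if it already has it cached.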
How Happy They Became with H2O/mruby, and the Future of HTTP (Ichito Nagata)
The document summarizes the process of migrating the RoomClip image resizing service from Nginx to H2O. Key points include:
- The complex Nginx configuration was difficult to debug and posed security risks. H2O provided better debuggability through Ruby.
- The migration took 1-2 months and involved refactoring image processing out of the web server and into separate Converter processes.
- Benchmarks showed H2O had comparable or better performance than Nginx, with lower latency percentiles and reduced disk and S3 usage.
- Additional benefits included the ability to write unit tests in mruby and new libraries like mruby-rack for running Ruby code on H2O.
eBPF (extended Berkeley Packet Filters) is a modern kernel technology that can be used to introduce dynamic tracing into a system that wasn't prepared or instrumented in any way. The tracing programs run in the kernel, are guaranteed to never crash or hang your system, and can probe every module and function -- from the kernel to user-space frameworks such as Node and Ruby.
In this workshop, you will experiment with Linux dynamic tracing first-hand. First, you will explore BCC, the BPF Compiler Collection, which is a set of tools and libraries for dynamic tracing. Many of your tracing needs will be answered by BCC, and you will experiment with memory leak analysis, generic function tracing, kernel tracepoints, static tracepoints in user-space programs, and the "baked" tools for file I/O, network, and CPU analysis. You'll be able to choose between working on a set of hands-on labs prepared by the instructors, or trying the tools out on your own test system.
Next, you will hack on some of the bleeding edge tools in the BCC toolkit, and build a couple of simple tools of your own. You'll be able to pick from a curated list of GitHub issues for the BCC project, a set of hands-on labs with known "school solutions", and an open-ended list of problems that need tools for effective analysis. At the end of this workshop, you will be equipped with a toolbox for diagnosing issues in the field, as well as a framework for building your own tools when the generic ones do not suffice.
Kernel Recipes 2013 - Deciphering Oopsies (Anne Nicolas)
The Linux kernel is a very complex beast living in millions of households and data centers around the world. Normally, you’re not supposed to notice its presence, but when it gets cranky because of something not suiting it, it spits out crazy messages colloquially called oopses and panics.
In this talk, we’re going to try to understand how to read those messages in order to be able to address its complaints so that it can get back to work for us.
The Flux Capacitor of Kafka Streams and ksqlDB (Matthias J. Sax, Confluent) (HostedbyConfluent)
How do Kafka Streams and ksqlDB reason about time, how does it affect my application, and how do I take advantage of it? In this talk, we explore the "time engine" of Kafka Streams and ksqlDB and answer important questions about how you can work with time. What is the difference between sliding, time, and session windows, and how do they relate to time? What timestamps are computed for result records? What temporal semantics are offered in joins? And why does the suppress() operator not emit data? Besides answering those questions, we will share tips and tricks for how you can "bend" time to your needs, and explain when mixing event-time and processing-time semantics makes sense. Six months ago, the question "What's the time? …and Why?" was asked and partly answered at Kafka Summit in San Francisco, focusing on writing data, data storage and retention, as well as consuming data. In this talk, we continue our journey and delve into data stream processing with Kafka Streams and ksqlDB, both of which offer rich time semantics. At the end of the talk, you will be well prepared to process past, present, and future data with Kafka Streams and ksqlDB.
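For the windowing question, which window a timestamp lands in is plain arithmetic; a sketch of tumbling and hopping assignment (this mirrors the semantics, not the Kafka Streams API itself):

```python
# A tumbling window of size S assigns a record with timestamp ts to the
# single window starting at ts - ts % S.  A hopping window of size S
# with advance A assigns it to every window whose start is a multiple
# of A and lies in (ts - S, ts].
def tumbling_window(ts, size):
    start = ts - ts % size
    return (start, start + size)

def hopping_windows(ts, size, advance):
    first = ts - ts % advance - size + advance
    return [(s, s + size) for s in range(max(first, 0), ts + 1, advance)]
```

Tumbling windows are just hopping windows with advance equal to size, so every record belongs to exactly one of them; a smaller advance makes the windows overlap and each record lands in several.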
Openstack on Fedora, Fedora on Openstack: An Introduction to Cloud IaaS (Sadique Puthen)
OpenStack is an open source cloud operating system that provides infrastructure-as-a-service capabilities. It includes components for compute (Nova), storage (Cinder, Swift, Manila), networking (Neutron), orchestration (Heat), metering (Ceilometer), and the dashboard (Horizon). The document discusses these components in depth and how they provide infrastructure services. It also covers deployment options like Packstack, TripleO, and Ironic, as well as other OpenStack projects. The presentation introduces OpenStack and its capabilities and components.
Imagine you're tackling one of these evasive performance issues in the field, and your go-to monitoring checklist doesn't seem to cut it. There are plenty of suspects, but they are moving around rapidly and you need more logs, more data, more in-depth information to make a diagnosis. Maybe you've heard about DTrace, or even used it, and are yearning for a similar toolkit, which can plug dynamic tracing into a system that wasn't prepared or instrumented in any way.
Hopefully, you won't have to yearn for a lot longer. eBPF (extended Berkeley Packet Filters) is a kernel technology that enables a plethora of diagnostic scenarios by introducing dynamic, safe, low-overhead, efficient programs that run in the context of your live kernel. Sure, BPF programs can attach to sockets; but more interestingly, they can attach to kprobes and uprobes, static kernel tracepoints, and even user-mode static probes. And modern BPF programs have access to a wide set of instructions and data structures, which means you can collect valuable information and analyze it on-the-fly, without spilling it to huge files and reading them from user space.
In this talk, we will introduce BCC, the BPF Compiler Collection, which is an open set of tools and libraries for dynamic tracing on Linux. Some tools are easy and ready to use, such as execsnoop, fileslower, and memleak. Other tools such as trace and argdist require more sophistication and can be used as a Swiss Army knife for a variety of scenarios. We will spend most of the time demonstrating the power of modern dynamic tracing -- from memory leaks to static probes in Ruby, Node, and Java programs, from slow file I/O to monitoring network traffic. Finally, we will discuss building our own tools using the Python and Lua bindings to BCC, and its LLVM backend.
Using Perl to Create Web Services with XML-RPC (Johnny Pork)
This document discusses using XML-RPC to create web services with Perl. It provides an example of creating an XML-RPC client in Perl to call methods on a remote web service. It also gives an example of building an XML-RPC listener service in Perl to expose methods to remote clients. The document stresses the importance of documenting the API of any XML-RPC web service.
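The document's examples are in Perl; for comparison, the same listener/client pair can be sketched with Python's stdlib xmlrpc modules (the method name sample.add is illustrative):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Listener: expose a method to remote callers.  The registered name is
# the API contract clients depend on, hence the advice to document it.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "sample.add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: dotted method names map onto attribute access on the proxy.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.sample.add(2, 3)
server.shutdown()
```

The call is marshalled as an XML payload over HTTP POST, which is exactly what the Perl client and listener in the document exchange.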
Troubleshooting Common oslo.messaging and RabbitMQ Issues (Michael Klishin)
This document discusses common issues with oslo.messaging and RabbitMQ and how to diagnose and resolve them. It provides an overview of oslo.messaging and how it uses RabbitMQ for RPC calls and notifications. Examples are given of where timeouts could occur in RPC calls. Methods for debugging include enabling debug logging, examining RabbitMQ queues and connections, and correlating logs from services. Specific issues covered include RAM usage, unresponsive nodes, rejected TCP connections, TLS connection failures, and high latency. General tips emphasized are using tools to gather data and consulting log files.
O'Reilly Velocity New York 2016 presentation on modern Linux tracing tools and technology. Highlights the available tracing data sources on Linux (ftrace, perf_events, BPF) and demonstrates some tools that can be used to obtain traces, including DebugFS, the perf front-end, and most importantly, the BCC/BPF tool collection.
Performance Lessons learned in vRouter - Stephen Hemmingerharryvanhaaren
This document summarizes lessons learned from optimizing DPDK performance in a virtual router. It discusses assigning cores to dataplane and control plane tasks, using small sleep intervals based on activity to limit CPU usage, and updating link state and statistics periodically. It recommends techniques like avoiding system calls, mutexes, and spinlocks for performance. Profiling showed the top functions were for packet input, output, and forwarding. Optimizing the longest prefix match for routing was also discussed.
Anatomy of neutron from the eagle eyes of troubelshoortersSadique Puthen
This document summarizes the anatomy of OpenStack Neutron through examples of real-life troubleshooting scenarios. It explores four examples: security group rules not being effective, instances not getting IP addresses from DHCP, floating IP connections randomly failing, and slow provider network communications. For each example, it explains the root cause found by understanding Neutron's architecture and packet flows, and describes the troubleshooting steps taken such as examining logs, monitoring processes, and using tools like tcpdump. The goal is to demonstrate Neutron anatomy and troubleshooting methods rather than just state the problems and solutions.
gRPC is a modern open source RPC framework that enables client and server applications to communicate transparently. It is based on HTTP/2 for its transport mechanism and Protocol Buffers as its interface definition language. Some benefits of gRPC include being fast due to its use of HTTP/2, supporting multiple programming languages, and enabling server push capabilities. However, it also has some downsides such as potential issues with load balancing of persistent connections and requiring external services for service discovery.
TCP provides reliable data transmission through mechanisms like the three-way handshake, congestion control using AIMD, and fast retransmit. However, it is vulnerable to attacks like RST injection to terminate connections or FIN scans to detect open ports. Defenses include randomizing sequence numbers, stateful firewalls to validate packets, and intrusion detection systems to detect scanning behaviors.
Improving the ZFS Userland-Kernel API with Channel Programs - BSDCAN 2017 - M...Matthew Ahrens
The document discusses improving the ZFS userland-kernel API by introducing "channel programs". Channel programs allow complex ZFS operations to be described programmatically and executed atomically in the kernel syncing context. This improves performance, atomicity, and reduces API complexity. Specific examples discussed include cloning filesystesms, recursively destroying datasets, and snapshotting with property listing. The technology is currently being used in the Delphix database virtualization product.
This document discusses several TCP congestion control algorithms: TCP Tahoe, Reno, New Reno, SACK, and Vegas. It provides details on how each algorithm handles slow start, congestion avoidance, fast retransmit, and congestion detection. TCP Vegas is highlighted as being superior to the other algorithms because it can detect and retransmit lost packets faster, has fewer retransmissions, more efficiently measures bandwidth availability, and experiences less congestion overall through proactive congestion detection and modified slow start and congestion avoidance.
Multi tier-app-network-topology-neutron-finalSadique Puthen
This document discusses how Neutron builds network topology for multi-tier applications. It explains that Neutron uses network namespaces to isolate tenant resources and correlate application topology to Neutron components. It provides details on how Neutron creates networks, routers, load balancers, firewalls, and VPN connections to build the necessary infrastructure for a sample multi-tier application topology across two OpenStack sites.
An in depth overview of the possibilities of SNMP. How to monitor your environment using SNMP.
Learn what you can do with SNMP and what SNMP can do for you within one hour. Most aspects of SNMP are addressed. Getting the information, setting values, but also how the information is presented and the difference between the OID and the MIBs.
In this presentation I’m trying to make SNMP “simple” again and understandable for everybody.
Geographically dispersed perconaxtra db cluster deploymentMarco Tusa
Geographically Dispersed Percona XtraDB Cluster Deployment
Percona XtraDB Cluster is a very robust, high performing and widly used solution to answer to High Availability needs. But it can be very challinging when we are in the need to deploy the cluster over a geographically disperse area.
This presentation will briefely discuss what is the right approach to sucessfully deploy PXC when in the need to cover multiple geographical sites, close and far.
- What is PXC and what happens in a set of node when commit
- Let us clarify, geo dispersed
- What to keep in mind then
- how to measure it correctly
- Use the right way (sync/async)
- Use help like replication_manager
Computer networks have experienced an explosive growth over the past few years and with
that growth have come severe congestion problems. For example, it is now common to see
internet gateways drop 10% of the incoming packets because of local buffer overflows.
Our investigation of some of these problems has shown that much of the cause lies in
transport protocol implementations (
not
in the protocols themselves): The ‘obvious’ ways
to implement a window-based transport protocol can result in exactly the wrong behavior
in response to network congestion. We give examples of ‘wrong’ behavior and describe
some simple algorithms that can be used to make right things happen. The algorithms are
rooted in the idea of achieving network stability by forcing the transport connection to obey
a ‘packet conservation’ principle. We show how the algorithms derive from this principle
and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion col-
lapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated
by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by
this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why
things had gotten so bad. In particular, we wondered if the 4.3
BSD
(Berkeley U
NIX
)
TCP
was mis-behaving or if it could be tuned to work better under abysmal network conditions.
The answer to both of these questions was “yes”.
The document proposes a 3-way site architecture using ETCD consensus protocol for high availability energy management systems. The key advantages are that it allows for fully automated failover between sites without risk of dual masters or data inconsistencies. When the network splits, ETCD elections ensure only one site remains enabled as leader while the others become standby. This prevents multiple enabled instances from overwriting each other's data. The system can also withstand intermittent networks without risk of prolonged data loss.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
Promise of Push (HTTP/2 Web Performance)Colin Bendell
This document discusses HTTP/2 server push and how it can be used to improve web performance. It begins with an overview of existing techniques for pushing content like polling, long polling, pushlets, and server-sent events. It then provides details on how HTTP/2 server push works, including the new PUSH_PROMISE frame that allows the server to push associated resources to the client. It examines the benefits of HTTP/2 push like reduced latency and improved caching as well as challenges around flexibility and complexity compared to other push techniques.
How happy they became with H2O/mruby and the future of HTTPIchito Nagata
The document summarizes the process of migrating the RoomClip image resizing service from Nginx to H2O. Key points include:
- The complex Nginx configuration was difficult to debug and posed security risks. H2O provided better debuggability through Ruby.
- The migration took 1-2 months and involved refactoring image processing out of the web server and into separate Converter processes.
- Benchmarks showed H2O had comparable or better performance than Nginx, with lower latency percentiles and reduced disk and S3 usage.
- Additional benefits included the ability to write unit tests in mruby and new libraries like mruby-rack for running Ruby code on H
eBPF (extended Berkeley Packet Filters) is a modern kernel technology that can be used to introduce dynamic tracing into a system that wasn't prepared or instrumented in any way. The tracing programs run in the kernel, are guaranteed to never crash or hang your system, and can probe every module and function -- from the kernel to user-space frameworks such as Node and Ruby.
In this workshop, you will experiment with Linux dynamic tracing first-hand. First, you will explore BCC, the BPF Compiler Collection, which is a set of tools and libraries for dynamic tracing. Many of your tracing needs will be answered by BCC, and you will experiment with memory leak analysis, generic function tracing, kernel tracepoints, static tracepoints in user-space programs, and the "baked" tools for file I/O, network, and CPU analysis. You'll be able to choose between working on a set of hands-on labs prepared by the instructors, or trying the tools out on your own test system.
Next, you will hack on some of the bleeding edge tools in the BCC toolkit, and build a couple of simple tools of your own. You'll be able to pick from a curated list of GitHub issues for the BCC project, a set of hands-on labs with known "school solutions", and an open-ended list of problems that need tools for effective analysis. At the end of this workshop, you will be equipped with a toolbox for diagnosing issues in the field, as well as a framework for building your own tools when the generic ones do not suffice.
Kernel Recipes 2013 - Deciphering OopsiesAnne Nicolas
The Linux kernel is a very complex beast living in millions of households and data centers around the world. Normally, you’re not supposed to notice its presence but when it gets cranky because of something not suiting it, it spits crazy messages called colloquially
oopses and panics.
In this talk, we’re going to try to understand how to read those messages in order to be able to address its complaints so that it can get back to work for us.
The Flux Capacitor of Kafka Streams and ksqlDB (Matthias J. Sax, Confluent) K...HostedbyConfluent
How does Kafka Streams and ksqlDB reason about time, how does it affect my application, and how do I take advantage of it? In this talk, we explore the "time engine" of Kafka Streams and ksqlDB and answer important questions how you can work with time. What is the difference between sliding, time, and session windows and how do they relate to time? What timestamps are computed for result records? What temporal semantics are offered in joins? And why does the suppress() operator not emit data? Besides answering those questions, we will share tips and tricks how you can "bend" time to your needs and when mixing event-time and processing-time semantics makes sense. Six month ago, the question "What's the time? …and Why?" was asked and partly answered at Kafka Summit in San Francisco, focusing on writing data, data storage and retention, as well as consuming data. In this talk, we continue our journey and delve into data stream processing with Kafka Streams and ksqlDB, that both offer rich time semantics. At the end of the talk, you will be well prepared to process past, present, and future data with Kafka Streams and ksqlDB.
Openstack on Fedora, Fedora on Openstack: An Introduction to cloud IaaS – Sadique Puthen
OpenStack is an open source cloud operating system that provides infrastructure-as-a-service capabilities. It includes components for compute (Nova), storage (Cinder, Swift, Manila), networking (Neutron), orchestration (Heat), metering (Ceilometer), and dashboard (Horizon). The document discusses these components in depth and how they provide infrastructure services. It also covers deployment options like Packstack, TripleO, and Ironic, as well as other OpenStack projects.
Imagine you're tackling one of these evasive performance issues in the field, and your go-to monitoring checklist doesn't seem to cut it. There are plenty of suspects, but they are moving around rapidly and you need more logs, more data, more in-depth information to make a diagnosis. Maybe you've heard about DTrace, or even used it, and are yearning for a similar toolkit, which can plug dynamic tracing into a system that wasn't prepared or instrumented in any way.
Hopefully, you won't have to yearn for a lot longer. eBPF (extended Berkeley Packet Filters) is a kernel technology that enables a plethora of diagnostic scenarios by introducing dynamic, safe, low-overhead, efficient programs that run in the context of your live kernel. Sure, BPF programs can attach to sockets; but more interestingly, they can attach to kprobes and uprobes, static kernel tracepoints, and even user-mode static probes. And modern BPF programs have access to a wide set of instructions and data structures, which means you can collect valuable information and analyze it on-the-fly, without spilling it to huge files and reading them from user space.
In this talk, we will introduce BCC, the BPF Compiler Collection, which is an open set of tools and libraries for dynamic tracing on Linux. Some tools are easy and ready to use, such as execsnoop, fileslower, and memleak. Other tools such as trace and argdist require more sophistication and can be used as a Swiss Army knife for a variety of scenarios. We will spend most of the time demonstrating the power of modern dynamic tracing -- from memory leaks to static probes in Ruby, Node, and Java programs, from slow file I/O to monitoring network traffic. Finally, we will discuss building our own tools using the Python and Lua bindings to BCC, and its LLVM backend.
Use perl creating web services with xml rpc – Johnny Pork
This document discusses using XML-RPC to create web services with Perl. It provides an example of creating an XML-RPC client in Perl to call methods on a remote web service. It also gives an example of building an XML-RPC listener service in Perl to expose methods to remote clients. The document stresses the importance of documenting the API of any XML-RPC web service.
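The pattern described is straightforward; the talk's examples are in Perl, but the XML-RPC wire protocol is language-neutral. Here is a minimal sketch using Python's stdlib `xmlrpc` modules (the method name and local port are illustrative, not from the talk):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Listener: expose a documented method to remote XML-RPC clients.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]

def add(a, b):
    """Documented API: returns the sum of two integers."""
    return a + b

server.register_function(add, "demo.add")  # dotted method names are conventional
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: any XML-RPC implementation (Perl's RPC::XML included) can call it.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.demo.add(2, 3)
print(result)  # 5
```

Because the protocol is plain XML over HTTP, the same `demo.add` call could be issued from a Perl client unchanged, which is exactly the interoperability point the document makes.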
Troubleshooting common oslo.messaging and RabbitMQ issues – Michael Klishin
This document discusses common issues with oslo.messaging and RabbitMQ and how to diagnose and resolve them. It provides an overview of oslo.messaging and how it uses RabbitMQ for RPC calls and notifications. Examples are given of where timeouts could occur in RPC calls. Methods for debugging include enabling debug logging, examining RabbitMQ queues and connections, and correlating logs from services. Specific issues covered include RAM usage, unresponsive nodes, rejected TCP connections, TLS connection failures, and high latency. General tips emphasized are using tools to gather data and consulting log files.
Training Slides: 202 - Monitoring & Troubleshooting – Continuent
Learn all you need to know in this 43min training session about the ins and outs of cluster health monitoring: which tools to use to identify issues, how to use the logs to understand them better, and some best practices on how to resolve them.
TOPICS COVERED
Discuss tools used to monitor cluster health
Discuss tools used to identify issues
How to get more information about issues using the logs
Resolve common replication issues
Resolve common clustering issues
Get more information about replication lag
[Altibase] 12 replication part5 (optimization and monitoring) – altistory
The document discusses considerations for optimizing ALTIBASE HDB replication. Key points include optimizing the network, transaction tuning including limiting bulk DML and UPDATE operations, hardware requirements like bandwidth and network cards, software configuration, designing replication objects, handling partial failures and locks, replication sender and receiver tuning, and monitoring replication using internal meta tables and performance views.
Container Orchestration from Theory to Practice – Docker, Inc.
Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using Docker’s SwarmKit as a real-world example. Gain a deeper understanding of how orchestration systems like SwarmKit work in practice and walk away with more insights into your production applications.
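Distributed consensus, the first concept listed, ultimately rests on majority quorums: a value is committed once more than half of the nodes have accepted it. A deliberately tiny sketch of just that rule (not SwarmKit's actual Raft implementation):

```python
def has_quorum(acks, cluster_size):
    """True once a strict majority of nodes has acknowledged a proposal."""
    return acks > cluster_size // 2

# A 5-node cluster tolerates 2 failures: 3 acks still form a majority.
print(has_quorum(3, 5), has_quorum(2, 5), has_quorum(2, 3))  # True False True
```

This is why orchestrators recommend odd-sized manager sets: a 4-node cluster needs 3 acks just like a 5-node one, so the extra node buys no additional fault tolerance.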
This document provides an overview of stream processing. It discusses how stream processing systems are used to process large volumes of real-time data continuously and produce actionable information. Examples of applications discussed include traffic monitoring, network monitoring, smart grids, and sensor networks. Key concepts of stream processing covered include data streams, operators, windows, programming models, fault tolerance, and platforms like Storm and Spark Streaming.
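Windowing, one of the key concepts listed, is easy to demystify: a tumbling (fixed, non-overlapping) window simply buckets events by truncated timestamp. A minimal sketch of a windowed count:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Count events per key inside fixed, non-overlapping time windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for timestamp, key in events:
        window_start = (timestamp // window_size) * window_size
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}

events = [(1, "a"), (3, "b"), (5, "a"), (11, "a"), (12, "b")]
print(tumbling_window_counts(events, 10))
# {0: {'a': 2, 'b': 1}, 10: {'a': 1, 'b': 1}}
```

Real platforms such as Storm and Spark Streaming add the hard parts this sketch omits: out-of-order events, window triggers, and fault-tolerant state.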
An introduction to_rac_system_test_planning_methods – Ajith Narayanan
This document provides an overview and agenda for testing an Oracle Real Application Clusters (RAC) system. It outlines 10 tests to validate that the RAC system is installed and configured correctly, and to verify basic functionality and the system's ability to achieve high availability and performance objectives. The tests include planned node reboots, unplanned node failures, instance failures, and network failures. Metrics like failover time, recovery time, and downtime are proposed to measure success.
Extreme HTTP Performance Tuning: 1.2M API req/s on a 4 vCPU EC2 Instance – ScyllaDB
In this talk I will walk you through the performance tuning steps that I took to serve 1.2M JSON requests per second from a 4 vCPU c5 instance, using a simple API server written in C.
At the start of the journey the server is capable of a very respectable 224k req/s with the default configuration. Along the way I made extensive use of tools like FlameGraph and bpftrace to measure, analyze, and optimize the entire stack, from the application framework, to the network driver, all the way down to the kernel.
I began this wild adventure without any prior low-level performance optimization experience; but once I started going down the performance tuning rabbit-hole, there was no turning back. Fueled by my curiosity, willingness to learn, and relentless persistence, I was able to boost performance by over 400% and reduce p99 latency by almost 80%.
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con... – ScyllaDB
In this session, Tanel introduces a new open source eBPF tool for efficiently sampling both on-CPU events and off-CPU events for every thread (task) in the OS. Linux standard performance tools (like perf) allow you to easily profile on-CPU threads doing work, but if we want to include the off-CPU timing and reasons for the full picture, things get complicated. Combining eBPF task state arrays with periodic sampling for profiling allows us to get both a system-level overview of where threads spend their time, even when blocked and sleeping, and allow us to drill down into individual thread level, to understand why.
This document discusses resource managers and the REEF (Retainable Evaluation Execution Framework) system. It notes that resource managers allow for true multi-tenancy with many workloads and users, but are typically only suitable for sophisticated applications. It then provides examples of machine learning, graph processing, and SQL/MapReduce workloads. The document outlines some of the challenges of using software "silos" and how REEF aims to avoid silos by allowing different computation models to be composed together using a common resource manager and distributed file system (DFS). It provides high-level overviews of the REEF control flow and data plane architectures.
This document discusses parameters for tuning the performance of WebLogic servers. It covers OS-level TCP parameters, JVM heap size and GC logging parameters, WebLogic server-level parameters like work managers, execute queues, and stuck threads, and JDBC and JMS pool parameters. It also provides an overview of different types of garbage collection in the HotSpot JVM.
SAP HANA System Replication - Setup, Operations and HANA Monitoring – Linh Nguyen
SAP HANA Distributed System Replication setup, operations and associated HANA Monitoring of Disaster Recovery (DR) scenario using OZSOFT HANA Management Pack for SCOM
Network and TCP performance relationship workshop – Kae Hsu
The document discusses TCP performance factors and techniques to improve TCP performance in network environments. It covers TCP operation principles, factors that impact TCP performance like packet loss, out-of-order packets, and congestion. It also discusses approaches to improve performance through the network like reducing packet loss and congestion, and through appliances like TCP offloading and optimization to reduce system resource usage.
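TCP's additive-increase/multiplicative-decrease congestion control, one of the performance factors discussed, can be illustrated with a toy simulation. This is a simplification (one window update per round trip, a loss event halves the window, timeouts not modeled), not a faithful model of any TCP stack:

```python
def aimd(loss_events, cwnd=1.0, ssthresh=16.0):
    """Toy TCP congestion window: slow start doubles cwnd up to ssthresh,
    congestion avoidance adds 1 per RTT, and a loss halves the window
    (fast-recovery style)."""
    history = []
    for loss in loss_events:
        if loss:
            ssthresh = max(cwnd / 2.0, 1.0)
            cwnd = ssthresh                    # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2.0, ssthresh)   # slow start
        else:
            cwnd += 1.0                        # additive increase
        history.append(cwnd)
    return history

# Six clean round trips, one loss, then recovery:
print(aimd([False] * 6 + [True] + [False] * 3))
# [2.0, 4.0, 8.0, 16.0, 17.0, 18.0, 9.0, 10.0, 11.0, 12.0]
```

The sawtooth this produces is why packet loss and out-of-order delivery hurt throughput so much: every loss signal cuts the sending rate in half, and recovery is only linear.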
Tempesta FW - Framework and Firewall for WAF and DDoS mitigation, Alexander Krizha... – Ontico
Tempesta FW is an open source firewall and framework for HTTP DDoS mitigation and web application firewall capabilities. It functions at layers 3 through 7 and directly embeds into the Linux TCP/IP stack. As a hybrid of an HTTP accelerator and firewall, it aims to accelerate content delivery to mitigate DDoS attacks while filtering requests. This allows it to more effectively mitigate application layer DDoS attacks compared to other solutions like deep packet inspection or traditional firewalls and HTTP servers.
Kernel Recipes 2014 - NDIV: a low overhead network traffic diverter – Anne Nicolas
NDIV is a young, very simple, yet efficient network traffic diverter. Its purpose is to help build network applications that intercept packets at line rate with a very low processing overhead. A first example application is a stateless HTTP server reaching line rate on all packet sizes.
Willy Tarreau, HaproxyTech
The document discusses the Reactor Pattern and Event-Driven Programming using EventMachine and Thin as examples. It provides an overview of how Thin and EventMachine work together using the Reactor Pattern to provide scalable concurrent networking. Key aspects covered include how EventMachine acts as a reactor that handles events asynchronously using threads, and how Thin integrates with EventMachine by registering request handlers and processing requests concurrently.
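The Reactor Pattern described above — one loop watching descriptors and dispatching readiness events to registered handlers — can be sketched with Python's stdlib `selectors`. This shows the idea behind EventMachine, not its API:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def run_reactor():
    """The reactor loop: wait for readiness, dispatch to the handler."""
    while sel.get_map():                       # run until nothing is registered
        for key, _ in sel.select(timeout=1):
            key.data(key.fileobj)              # key.data is the handler

def echo_once(conn):
    """Handler: fired by the reactor when the socket becomes readable."""
    conn.sendall(conn.recv(1024).upper())
    sel.unregister(conn)
    conn.close()

a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ, echo_once)
a.sendall(b"hello")
run_reactor()                                  # returns once b is unregistered
result = a.recv(1024)
print(result)  # b'HELLO'
```

The scalability claim follows from the structure: one loop multiplexes many connections, so handlers must stay short and non-blocking, which is exactly the discipline Thin imposes on request processing.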
Container orchestration from theory to practice – Docker, Inc.
"Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using SwarmKit and Kubernetes as a real-world example. Gain a deeper understanding of how orchestration systems work in practice and walk away with more insights into your production applications."
Training Slides: Intermediate 202: Performing Cluster Maintenance with Zero-D... – Continuent
Join us for this intermediate training session as we explore how to leverage the power of Tungsten Clustering to perform database and OS maintenance with zero downtime. This training is aimed at anyone new to Continuent, with no prior experience required, but it will also serve as a wonderful refresher for current users. Basic MySQL knowledge is assumed.
AGENDA
- Review the cluster architecture
- Describe the rolling maintenance process
- Explore what happens during a master switch
- Discuss cluster states
- Demonstrate rolling maintenance
- Re-cap commands and resources used during the demo
Chicago Flink Meetup: Flink's streaming architecture – Robert Metzger
This document summarizes the architecture of Apache Flink's streaming runtime. Flink is a stream processor that embraces the streaming nature of data with low latency, high throughput, and exactly-once guarantees. It achieves this through pipelining to keep data moving efficiently and distributed snapshots for fault tolerance. Flink also supports batch processing as a special case of streaming by running bounded streams as a single global window.
Tungsten Webinar: v6 & v7 Release Recap, and Beyond – Continuent
In this webinar, our Customer Success Directors, Matthew Lang and Chris Parker, present a recap of our v6 and v7 releases, exploring the newer features of v7 as well as a preview of what to expect in forthcoming releases over the next year.
AGENDA
v6 Patch Releases
v7 Release
- v7 Patch Releases
New Feature overview
- API & Security Changes
- Dynamic Active/Active (DAA)
- Distributed Datasource Groups (DDG)
- Connector in Docker
- Backup & Recovery updates
- Dashboard
- Additional Features & Enhancements
Coming Soon
SPEAKERS
Matthew Lang - Director of Customer Success at Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Chris Parker - Director of Customer Success at Continuent - is based in the UK, and has over 20 years of experience working as a database administrator. Prior to joining Continuent, Chris managed large-scale Oracle and MySQL deployments at Warner Bros., BBC, and prior to joining the Continuent Team, he worked at the online fashion company, Net-A-Porter.
Continuent Tungsten Value Proposition Webinar – Continuent
Continuent Tungsten Clustering is positioned as a reliable and comprehensive solution for enterprise MySQL database management, addressing critical business needs for continuous operations and global scalability.
This webinar provides a clear understanding of the Continuent Tungsten Clustering value proposition — what makes our enterprise-grade software worth the cost?
It highlights Tungsten Clustering's extensive capabilities, such as high availability and disaster recovery through automatic local and global failover, zero-downtime operations and read/write splitting to name a few.
Continuent also provides the best 24/7 support, has great resources including documentation, blog posts, webinars, and a highly experienced team ready to help and guide you the entire way from concept to production.
AGENDA
Continuent Tungsten Value Proposition
Continuent Tungsten: Clustering 101
Continuent Tungsten: Local Clustering
Continuent Tungsten: Global Clustering
Tungsten Clustering Value: Availability
Tungsten Clustering Value: Scalability
Tungsten Clustering Demo: Global Active/Passive
The Wrap Up and Contact Information
SPEAKER
Eric M Stone, COO and VP of Product Management at Continuent, is a veteran of fast-paced, large-scale enterprise environments with 35 years of Information Technology experience. With a focus on HA/DR, from building data centers and trading floors to world-wide deployments, Eric has architected, coded, deployed and administered systems for a wide variety of disparate customers, from Fortune 500 financial institutions to SMB’s.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #7: ClusterControl – Continuent
Severalnines’ ClusterControl vs. Continuent Tungsten Clusters for MySQL
Building a Geo-Distributed, Multi-Region and Highly Available MySQL Cloud Back-End
This is the seventh of our High Noon series covering MySQL clustering solutions for high availability (HA), disaster recovery (DR), and geographic distribution.
ClusterControl uses Galera to handle the MySQL clustering, which means it uses synchronous replication. Learn about this and more in this webinar!
You may use Tungsten Clustering with native MySQL, MariaDB or Percona Server for MySQL in GCP, AWS, Azure, and/or on-premises data centers for better technological capabilities, control, and flexibility. But learn about the pros and cons!
AGENDA
- Goals for the High Noon Webinar Series
- High Noon Series: Tungsten Clustering vs Others
- Oracle InnoDB Cluster
- Key Characteristics
- Certification-based Replication
- InnoDB Cluster Multi-Site Requirements
- Limitations Using InnoDB Cluster
- How to do better MySQL HA / DR / Geo-Distribution?
- InnoDB Cluster vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #5: Oracle’s InnoDB Cluster – Continuent
Oracle’s InnoDB Cluster vs. Continuent Tungsten Clusters for MySQL
Building a Geo-Distributed, Multi-Region and Highly Available MySQL Cloud Back-End
This is the fifth of our High Noon series covering MySQL clustering solutions for high availability (HA), disaster recovery (DR), and geographic distribution.
InnoDB Cluster uses MySQL’s group replication to handle the replication. It’s also known as semi-synchronous replication. Learn about this and more in this webinar!
You may use Tungsten Clustering with native MySQL, MariaDB or Percona Server for MySQL in GCP, AWS, Azure, and/or on-premises data centers for better technological capabilities, control, and flexibility. But learn about the pros and cons!
AGENDA
- Goals for the High Noon Webinar Series
- High Noon Series: Tungsten Clustering vs Others
- Oracle InnoDB Cluster
- Key Characteristics
- Certification-based Replication
- InnoDB Cluster Multi-Site Requirements
- Limitations Using InnoDB Cluster
- How to do better MySQL HA / DR / Geo-Distribution?
- InnoDB Cluster vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #4: MS Azure Database MySQL – Continuent
MS Azure Database for MySQL vs. Continuent Tungsten Clusters
Building a Geo-Scale, Multi-Region and Highly Available MySQL Cloud Back-End
This is the fourth of our High Noon series covering MySQL clustering solutions for high availability (HA), disaster recovery (DR), and geographic distribution.
Azure Database for MySQL is a managed database cluster within Microsoft Azure Cloud that runs MySQL community edition. There are two deployment options: “Single Server” and “Flexible Server (Preview)”. We will look at the Flexible Server version, even though it is still in preview, because most enterprise applications require failover, so this is the relevant comparison for Tungsten Clustering.
You may use Tungsten Clustering with native MySQL, MariaDB or Percona Server for MySQL in GCP, AWS, Azure, and/or on-premises data centers for better technological capabilities, control, and flexibility. But learn about the pros and cons!
Enjoy the webinar!
AGENDA
- Goals for the High Noon Webinar Series
- High Noon Series: Tungsten Clustering vs Others
- Microsoft Azure Database for MySQL
- Key Characteristics
- Certification-based Replication
- Azure MySQL Multi-Site Requirements
- Limitations Using Azure MySQL
- How to do better MySQL HA / DR / Geo-Scale?
- Azure MySQL vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #2: Galera Cluster – Continuent
Galera Cluster vs. Continuent Tungsten Clusters
Building a Geo-Scale, Multi-Region and Highly Available MySQL Cloud Back-End
This second installment of our High Noon series of on-demand webinars is focused on Galera Cluster (including MariaDB Cluster & Percona XtraDB Cluster). It looks at some of the key characteristics of Galera Cluster and how it fares as a MySQL HA / DR / Geo-Scale solution, especially when compared to Continuent Tungsten Clustering.
Watch this webinar to learn how to do better MySQL HA / DR / Geo-Scale.
AGENDA
- Goals for the High Noon Webinar Series
- High Noon Series: Tungsten Clustering vs Others
- Galera Cluster (aka MariaDB Cluster & Percona XtraDB Cluster)
- Key Characteristics
- Certification-based Replication
- Galera Multi-Site Requirements
- Limitations Using Galera Cluster
- How to do better MySQL HA / DR / Geo-Scale?
- Galera Cluster vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #1: AWS Aurora – Continuent
AWS Aurora vs. Continuent Tungsten Clusters
Building a Geo-Scale, Multi-Region and Highly Available MySQL Cloud Back-End
This first installment of our High Noon series of on-demand webinars is focused on AWS Aurora. It looks at some of the key characteristics of AWS Aurora and how it fares as a MySQL HA / DR / Geo-Scale solution, especially when compared to Continuent Tungsten Clustering.
Watch this webinar to learn how to do better MySQL HA / DR / Geo-Scale.
AGENDA
- Goals for the High Noon Webinar Series
- AWS Aurora
- Key Characteristics
- Cross Region Requirements
- RDS Proxy
- Limitations Using AWS Aurora
- How to do better MySQL HA / DR / Geo-Scale?
- AWS Aurora vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Webinar Slides: AWS Aurora MySQL Replacement: Break Away From Geo-Limitations... – Continuent
Samsung's ARTIK IoT platform used Continuent's active/active MySQL cluster topology to manage its IoT monetization portal serving millions of worldwide devices. ARTIK chose Continuent over AWS Aurora for its availability, disaster recovery, geo-scale capabilities, and cost-effectiveness. Continuent's solution provided high availability, continuous operations across regions, and performance at a reasonable cost with 24/7 support.
Webinar Slides: No Data Loss MySQL: Guaranteed Credit Card Transaction Availa... – Continuent
Cloud-Based Active/Active Tungsten MySQL Clusters @ Bluefin Payments
Bluefin Payments is a Financial Services SaaS company that provides 24/7/365 application availability for their payment gateway and decryption-as-a-service, which are essential to point-of-sale (POS) solutions.
Financial Services typically require two or more active data centers to provide their customers with continuous availability along with quick response times. Bluefin Payments uses co-located data centers with active/active replication between each MySQL cluster, which provides a complete, local High Availability and a remote Disaster Recovery solution for more than 350 million financial transactions each month.
Watch this webinar replay with Continuent Eero Teerikorpi for a discussion about geo-distributed active/active MySQL replication for Financial Services SaaS Providers based on a case study of Continuent customer Bluefin Payments, and on how to guarantee credit card transaction availability with geo-distributed Tungsten MySQL clusters.
AGENDA
- Continuent Introduction
- How to Guarantee Credit Card Transaction Availability With Geo-Distributed Tungsten MySQL Clusters
- Continuent Tungsten Solutions & Benefits
- Key Benefit Highlight: No MySQL Data Loss
- Q&A
PRESENTER
Eero Teerikorpi - Founder and CEO, Continuent - is a 7-time serial entrepreneur who has more than 30 years of high-tech management and enterprise software experience. Eero has been in the MySQL marketplace virtually since day one, from the early 2000s. Eero has held top management positions at various cross-Atlantic entities (CEO at Alcom Corporation, President at Capslock, Executive Board Member at Esker S.A.) Eero started his career as a Product Manager at Apple Computer in Finland in the mid-80s. Eero also owns and manages a boutique NOET Vineyards producing high-quality dry-farmed Cabernet Sauvignon.
Eero is a former Navy officer and still an avid sailor on San Francisco Bay and around the world. Eero is a very active sportsman: a 4+ tennis player, a rookie golfer, a very careful mountain biker, and an experienced (40+ years) skier, both slalom and cross-country.
This webinar by our proxies guru, Gilles Rayrat, is the second installment of our database proxies webinar series and follows ‘Introduction to Database Proxies’. This second session looks at two breeds of proxies: the “fast & simple proxy” (aka 4 layer proxy) and the “intelligent proxy”, and takes a closer look at transparent failover as well as read-write splitting.
AGENDA
- Recap: Introduction to Database Proxies
- Two Breeds of Proxies
- The Fast & Simple Proxy (4 Layer Proxy)
- The Intelligent Proxy
- Transparent Failover
- Connection / Failure / Failover / Reconnection
- Read-Write Splitting
- 4 Layer Proxies R/W Split
- With Intelligent Proxies
- SQL Parsing / Application Driven / Smart Scale
PRESENTER
Gilles Rayrat - VP of Engineering, Continuent - has over 20 years experience in software engineering. Previously holding positions at Orange and Xerox, he joined the Continuent adventure in 2005. As the connectivity expert at Continuent, he has worn many hats including software development, QA, support, project and operations management. Gilles has held most of the engineering positions that he now manages, giving him both deep and wide experience.
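The read-write splitting approaches above differ mainly in how a statement gets classified. The "SQL parsing" flavor can be caricatured in a few lines; a real intelligent proxy also tracks transactions, session state, and application hints, none of which this sketch attempts:

```python
READ_PREFIXES = ("select", "show", "describe", "explain")

def route(sql, in_transaction=False):
    """Naive SQL-parsing router: reads go to a replica, everything else
    (writes, DDL, anything inside a transaction) goes to the master."""
    if in_transaction:
        return "master"                    # keep transactional consistency
    stripped = sql.lstrip()
    first_word = stripped.split(None, 1)[0].lower() if stripped else ""
    return "replica" if first_word.startswith(READ_PREFIXES) else "master"

print(route("SELECT * FROM orders"))           # replica
print(route("UPDATE orders SET total = 0"))    # master
print(route("SELECT 1", in_transaction=True))  # master
```

Even this toy shows why layer-4 proxies cannot do read-write splitting on their own: the decision requires looking inside the SQL, which only a protocol-aware (intelligent) proxy can do.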
Webinar Slides: High Volume MySQL HA: SaaS Continuous Operations with Terabyt... – Continuent
Large Number of On-premises Tungsten MySQL Clusters @ Marketo
Marketo is a very large marketing automation SaaS provider. Marketo scaled from tens of customers back in 2010 to thousands of enterprise customers today using Tungsten Clustering and several hundred MySQL instances.
In this webinar, Continuent CEO Eero Teerikorpi discusses some common challenges SaaS providers face, such as having to provide 24/7/365 operations with zero downtime, even during maintenance operations. In addition, SaaS providers need to have an easy, consistent, and cost-effective model to scale.
Watch this webinar replay to learn how to guarantee continuous operations for a SaaS provider with billions of daily transactions and terabytes of data using Tungsten MySQL Clusters.
AGENDA
- Continuent Introduction
- How to Guarantee Continuous Operations for a SaaS with Terabytes Data with Tungsten MySQL Clusters
- Continuent Tungsten Solutions & Benefits
- Key Benefit Highlight: Billions of MySQL Transactions, Very Large Data Volume
- Q&A
PRESENTER
Eero Teerikorpi - founder and CEO, Continuent - is a 7-time serial entrepreneur who has more than 30 years of high-tech management and enterprise software experience. Eero has been in the MySQL marketplace virtually since day one, from the early 2000s. Eero has held top management positions at various cross-Atlantic entities (CEO at Alcom Corporation, President at Capslock, Executive Board Member at Esker S.A.) Eero started his career as a Product Manager at Apple Computer in Finland in the mid-80s. Eero also owns and manages a boutique NOET Vineyards producing high-quality dry-farmed Cabernet Sauvignon.
Eero is a former Navy officer and still an avid sailor on San Francisco Bay and around the world. Eero is a very active sportsman: a 4+ tennis player, a rookie golfer, a very careful mountain biker, and an experienced (40+ years) skier, both slalom and cross-country.
Training Slides: 205 - Installing and Configuring Tungsten Dashboard – Continuent
This training session introduces Tungsten Dashboard, from installation to configuration, in a demo-style format. Tungsten Dashboard is the ideal tool for cluster maintenance, and this training demonstrates how.
TOPICS COVERED
- Present the Dashboard
- Cluster Maintenance with the Dashboard
- How to Install the dashboard
Training Slides: 352 - Tungsten Replicator for MongoDB & Kafka – Continuent
This document provides an overview of replicating data from MySQL to MongoDB and Kafka using Tungsten Replicator. It reviews the replicator workflow, prerequisites for MongoDB and Kafka targets, and sample configurations. The presenter demonstrates Tungsten Replicator's ability to extract data from MySQL, convert rows to messages with metadata, and apply the messages to MongoDB collections or Kafka topics. Requirements like user accounts, ports, and Zookeeper configuration are discussed.
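The "convert rows to messages with metadata" step can be pictured as a small envelope builder. The field names below are purely illustrative — they are not the replicator's actual message schema:

```python
import json

def row_to_message(schema, table, op, row, seqno):
    """Wrap one changed row in a keyed JSON envelope, the shape a Kafka
    topic (or a MongoDB collection) would receive. Field names are
    illustrative only, not Tungsten Replicator's real format."""
    return {
        "topic": f"{schema}.{table}",   # one topic/collection per table
        "key": str(row["id"]),          # keying preserves per-row ordering
        "value": json.dumps({
            "meta": {"op": op, "seqno": seqno, "source": f"{schema}.{table}"},
            "data": row,
        }),
    }

msg = row_to_message("shop", "orders", "INSERT", {"id": 42, "total": 9.99}, 1001)
print(msg["topic"], msg["key"])  # shop.orders 42
```

Keying each message by row identifier is what lets Kafka preserve per-row ordering across partitions, which matters when inserts, updates, and deletes for the same row must be applied in sequence.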
Training Slides: 351 - Tungsten Replicator for Data Warehouses – Continuent
Follow this 36min training session that looks at using Tungsten Replicator with data warehouse targets such as Hadoop, Redshift and Vertica in particular, including a demo showing how to set these configurations up.
TOPICS COVERED
- Review replicator flow
- Explore Hadoop, Redshift and Vertica specific prerequisites
- Review configurations
- Demo
Training Slides: 303 - Replicating out of a Cluster – Continuent
Watch this 33min training on how to replicate out of your cluster using the standalone Replicator. This covers a walkthrough of what a Cluster Extractor is and what you can do with it, including a demonstration on how to install it.
TOPICS COVERED
- Explore the Cluster Extractor
- Review possible targets
- Discuss Use Cases
- Demonstrate an installation
Training Slides: 206 - Using the Tungsten Cluster AMI – Continuent
In this 38min training session, we’re looking at how to use the Tungsten Cluster AMI on Amazon with a recap of the different Tungsten Cluster topologies that are available, followed by a walkthrough of what the Tungsten Cluster AMI is, how it works and how to avail of it.
TOPICS COVERED
- Recap Tungsten Cluster Topologies
- Explore the Tungsten Cluster AMI
- Manual deploy and configure
- Cloud Formation Deployment
Training Slides: 254 - Using the Tungsten Replicator AMIContinuent
This 30min training session walks you through what the Tungsten Replicator AMI is, how to avail of it and how to use it, including a “live” demo of what it actually looks like.
TOPICS COVERED
- Recap and Review sources and targets
- What is the Tungsten Replicator AMI? (And what it isn’t!)
- How to configure Tungsten Replicator AMI?
- FAQ’s
Training Slides: 253 - Filter like a ProContinuent
This training session covers the different aspects involved in filtering, including a recap of replication stages and pipelines, the different types of available filters … and more. The session runs an hour and twenty minutes.
TOPICS COVERED
- Filtering Discussion
- Recap Replication Stages and Pipelines
- Look at available Filters
- Example configurations and Use Cases
- Enabling Filters
- Custom Filters
Training Slides: 252 - Monitoring & TroubleshootingContinuent
Learn in the next 57min about Tungsten Replicator monitoring and troubleshooting, including an overview of some common issues, triggers, failures (and how to deal with them) as well as some tips & tricks on monitoring.
TOPICS COVERED
- Discuss Monitoring & Troubleshooting
- Review Common Issues
- Triggers
- Finding and Understanding Log Files
- Handling (And Recovering From) Failures
- Skipping Transactions
- Resetting
- Scripts for Monitoring
Training Slides: 302 - Securing Your Cluster With SSLContinuent
This document discusses securing a Tungsten cluster with SSL. It explains what SSL is and why it is used. It then covers deploying SSL for cluster communications and for the Tungsten connector. For the cluster, SSL is enabled in tungsten.ini and certificates are generated and distributed. For the connector in proxy mode, MySQL certificates must be imported into keystores and SSL configured from the connector to the database. SSL can also be configured from the application to the connector. Successful SSL encryption is verified using tcpdump and checking the Tungsten connection status. The next steps will cover the Tungsten dashboard.
Training Slides: 153 - Working with the CLI
1. The MySQL Availability Company
Tungsten Replicator Master Class
Basics: Working with Command Line Tools
Chris Parker, Customer Success Director, EMEA & APAC
2. Topics
In this short course, we will
• Re-cap the previous Installation
• Explore the main Command Line Tools
• tpm
• trepctl
• thl
6. tpm
• Tungsten Package Manager
• As well as using tpm for installs and updates, it can also be used for a number of other actions. You can issue tpm help for a list of possible options.
• Most common options:
• tools/tpm validate[-update]
• [tools/]tpm update [--replace-release]
• tools/tpm install
• tpm diag – Gathers a package of diagnostic information for support
• tpm mysql – Launches the MySQL command-line client and connects to the MySQL server process running on the local host
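The common tpm actions above can be laid out as a sketch. The `run` wrapper only echoes each command, so the sequence can be reviewed without touching an installation; the `tools/` prefix assumes you are in the staging directory, as on the slide.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# Drop the 'run' prefix to execute for real.
run() { echo "+ $*"; }

run tools/tpm validate                  # check configuration before an install
run tools/tpm install                   # perform the installation
run tools/tpm validate-update           # check configuration before an update
run tools/tpm update --replace-release  # update to a newly staged release
run tpm diag                            # gather a diagnostic package for support
run tpm mysql                           # open a MySQL client on the local host
```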
7. trepctl
• Used to control and manage the replicator Java process
• Most common uses are
• View replicator status
• Stop/Start replication
• Skip “safe” errors
• trepctl help to see all options
8. trepctl
• trepctl services
• Short list output of all services running on the host
• Shows basic information
• trepctl [-service SERVICENAME] status [-r N]
• Shows the full status of the replicator
• Specify -service if multiple services are available
• Specify -r N to refresh every N seconds until interrupted with CTRL+C
• trepctl [-service SERVICENAME] status -name stages
• A more complete status view showing detailed output of each replicator stage
9. trepctl
• trepctl [-service SERVICENAME] qs [-r N]
• Shows a quick summary of the replicator progress
• Specify -service if multiple services are available
• Specify -r N to refresh every N seconds until interrupted with CTRL+C
• trepctl [-service SERVICENAME] perf [-r N]
• Shows the status of each stage of the replication pipeline
• Output differs between Primary and Replicas
• Specify -service if there are multiple services available
• Specify -r N to refresh every N seconds until interrupted with CTRL+C
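The read-only trepctl commands above can be sketched together. TREPCTL defaults to a stub that only prints, and the service name `alpha` is taken from the status output later in this deck:

```shell
#!/bin/sh
# TREPCTL defaults to 'echo trepctl' so every command below just prints;
# set TREPCTL=trepctl on a live host to execute them for real.
TREPCTL="${TREPCTL:-echo trepctl}"

$TREPCTL services                             # short list of all services
$TREPCTL -service alpha status                # full status of one service
$TREPCTL -service alpha status -name stages   # detailed per-stage view
$TREPCTL -service alpha qs -r 2               # quick summary, refresh every 2s
$TREPCTL -service alpha perf                  # per-stage performance figures
```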
10. trepctl
• trepctl [-service SERVICENAME] reset {OPTIONS}
• Performs a FULL reset of the replicator
• VERY destructive if used incorrectly
• Resets SEQNO to 0
• trepctl [-service SERVICENAME] offline|online {OPTIONS}
• Bring a service online or offline
• Can be used with various options to control how/when
• Used with -skip-seqno to skip errors
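As a sketch of the online/offline flow, here is a hypothetical helper wrapping the -skip-seqno option. The service name and seqno are made up, and skipping is only safe once you have inspected the failing event in the THL:

```shell
#!/bin/sh
# Hypothetical recovery helper. Skipping a transaction is destructive if
# that event was actually needed downstream, so verify it in the THL first.
TREPCTL="${TREPCTL:-trepctl}"

skip_and_online() {
  service="$1"; seqno="$2"
  $TREPCTL -service "$service" online -skip-seqno "$seqno"
}

# Dry run against a stubbed trepctl (seqno 166765 is hypothetical):
TREPCTL="echo trepctl" skip_and_online alpha 166765
```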
12. trepctl status
appliedLastEventId : mysql-bin.000005:0000000051631947;-1
appliedLastSeqno : 166764
appliedLatency : 0.769
autoRecoveryEnabled : false
autoRecoveryTotal : 0
channels : 1
clusterName : alpha
currentEventId : mysql-bin.000005:0000000051631947
currentTimeMillis : 1578578135591
dataServerHost : trainingdb1
extensions :
host : trainingdb1
latestEpochNumber : 9
masterConnectUri : thl://localhost:/
masterListenUri : thl://trainingdb1:2112/
On a Primary, this shows the last ending binary log position written to the THL along with the Seqno for that event, and the latency between the database commit to the binlog and the THL write completion.
On a Replica, it shows the last event written to the target database with the corresponding Seqno, and the latency between the source database commit and the completed apply of that event to the target database.
17. trepctl status
maximumStoredSeqNo : 166764
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : /var/lib/mysql
relativeLatency : 580.591
resourceJdbcDriver : org.drizzle.jdbc.DrizzleDriver
resourceJdbcUrl : jdbc:mysql:thin://trainingdb1:13306/${DBNAME}. . .
resourcePrecedence : 99
resourceVendor : mysql
rmiPort : 10000
When the Replicator goes into an OFFLINE:ERROR state, these fields will show all the associated information. Always check the trepsvc.log file for more detail as needed.
18. trepctl status
maximumStoredSeqNo : 166764
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : /var/lib/mysql
relativeLatency : 580.591
resourceJdbcDriver : org.drizzle.jdbc.DrizzleDriver
resourceJdbcUrl : jdbc:mysql:thin://trainingdb1:13306/${DBNAME}. . .
resourcePrecedence : 99
resourceVendor : mysql
rmiPort : 10000
The current source of THL. A Primary will show the binary log directory; a Replica will match the masterListenURI from the extractor.
19. trepctl status
maximumStoredSeqNo : 166764
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : /var/lib/mysql
relativeLatency : 580.591
resourceJdbcDriver : org.drizzle.jdbc.DrizzleDriver
resourceJdbcUrl : jdbc:mysql:thin://trainingdb1:13306/${DBNAME}. . .
resourcePrecedence : 99
resourceVendor : mysql
rmiPort : 10000
Latency between NOW and the timestamp of the last event in the local THL.
20. trepctl status
role : master
seqnoType : java.lang.Long
serviceName : alpha
serviceType : local
simpleServiceName : alpha
siteName : default
sourceId : trainingdb1
state : ONLINE
timeInStateSeconds : 85641.738
timezone : GMT
transitioningTo :
uptimeSeconds : 85673.511
useSSLConnection : false
version : Tungsten Clustering 6.1.4 build 44
21. trepctl status
role : master
seqnoType : java.lang.Long
serviceName : training1
serviceType : local
simpleServiceName : training1
siteName : default
sourceId : trainingdb1
state : ONLINE
timeInStateSeconds : 85641.738
timezone : GMT
transitioningTo :
uptimeSeconds : 85673.511
useSSLConnection : false
version : Tungsten Clustering 6.1.4 build 44
Current role can be: master, slave
Current state can be:
• ONLINE
• ONLINE:DEGRADED
• ONLINE:DEGRADED-BINLOG-FULLY-READ
• OFFLINE:NORMAL
• SUSPECT
• OFFLINE:ERROR
• GOING-ONLINE:SYNCHRONISING
• GOING-ONLINE:RESTORING
• GOING-ONLINE:PROVISIONING
22. Applied Latency vs Relative Latency
The appliedLatency is the latency between the commit time of the source event and the time the last committed transaction reached the end of the corresponding pipeline within the replicator. Within a primary, this indicates the latency between the transaction commit time and when it was written to the THL. In a replica, it indicates the latency between the commit time on the primary database and when the transaction has been committed to the destination database. Clocks must be synchronized across hosts for this information to be accurate. The latency is measured in seconds.
Increasing latency may indicate that the destination database is unable to keep up with the transactions from the primary. In replicators operating with parallel apply, appliedLatency indicates the latency of the trailing channel. Because the parallel apply mechanism does not update all channels simultaneously, the figure shown may trail significantly behind the actual latency.
The relativeLatency is the latency between now and the timestamp of the last event written into the local THL. This gives an indication of how fresh the incoming THL information is. On a primary, it indicates whether the primary is keeping up with transactions generated on the primary database. On a replica, it indicates how up to date the THL read from the extractor is.
A large value can indicate that the database is not busy, that a large transaction is currently being read from the source database or from the primary replicator, or that the replicator has stalled for some reason. An increasing relativeLatency on a replica may indicate that the replicator has stalled and stopped applying changes to the database.
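A minimal monitoring sketch of the distinction above, parsing the sample values shown earlier in this deck. The captured STATUS text stands in for live `trepctl status` output, and the 60-second threshold is an arbitrary assumption:

```shell
#!/bin/sh
# Hypothetical monitoring check. STATUS is a captured sample; on a live
# host you would use: STATUS=$(trepctl status)
STATUS='appliedLatency : 0.769
relativeLatency : 580.591
state : ONLINE'

applied=$(printf '%s\n' "$STATUS"  | awk '/appliedLatency/  {print $3}')
relative=$(printf '%s\n' "$STATUS" | awk '/relativeLatency/ {print $3}')
state=$(printf '%s\n' "$STATUS"    | awk '/^state/          {print $3}')

# A growing relativeLatency alone may just mean an idle source; alert on
# state problems, or on appliedLatency itself climbing.
if [ "$state" != "ONLINE" ]; then
  echo "ALERT: replicator state is $state"
elif awk "BEGIN{exit !($applied > 60)}"; then
  echo "ALERT: applied latency is ${applied}s"
else
  echo "OK: applied=${applied}s relative=${relative}s state=$state"
fi
```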
23. thl
• Interface for viewing the contents of the THL
• thl help to view all command options
• thl info – Show a summary of the THL available on disk
• thl list will produce a lot of output; always use it with options to filter the result set
• -low|from SEQ – Start from supplied seqno
• -high|to SEQ – Stop at supplied seqno
• -first – Show first seqno available
• -first N – Show first N entries
• -last – Show last seqno available
• -last N – Show last N entries
• thl index – re-index THL – can help to speed up replicator restarts
• thl purge – Use with CARE since this command will REMOVE ALL THL on disk for that service
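A dry-run sketch of the thl inspection commands above. THL defaults to a stub that only prints, the seqno range is hypothetical, and purge is deliberately left commented out:

```shell
#!/bin/sh
# THL defaults to 'echo thl' so the commands below just print;
# set THL=thl on a live host to run them for real.
THL="${THL:-echo thl}"

$THL info                            # summary of THL available on disk
$THL list -last 5                    # the last five entries
$THL list -low 166760 -high 166764   # a specific seqno range (hypothetical)
$THL index                           # re-index; can speed up restarts
# $THL purge   # REMOVES ALL THL on disk for the service -- use with care
```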