ClickHouse Monitoring 101: What to monitor and how (Altinity Ltd)
Webinar. Presented by Robert Hodges and Ned McClain, April 1, 2020
You are about to deploy ClickHouse into production. Congratulations! But what about monitoring? In this webinar we will introduce how to track the health of individual ClickHouse nodes as well as clusters. We'll describe available monitoring data, how to collect and store measurements, and graphical display using Grafana. We'll demo techniques and share sample Grafana dashboards that you can use for your own clusters.
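Much of the monitoring data described in the webinar comes from ClickHouse's own system tables. As a rough, hedged sketch (not taken from the slides), the Python snippet below polls system.metrics and system.events over the HTTP interface; the localhost URL and the absence of authentication are assumptions for a default local install.

```python
# Minimal sketch: pull current ClickHouse metrics over the HTTP interface.
# Assumes a local server on the default HTTP port (8123) with no password;
# adjust CLICKHOUSE_URL and credentials for your deployment.
import requests

CLICKHOUSE_URL = "http://localhost:8123/"

def query(sql: str) -> str:
    """Run a read-only query and return the raw TSV response."""
    resp = requests.post(CLICKHOUSE_URL, data=sql, timeout=10)
    resp.raise_for_status()
    return resp.text

# system.metrics holds instantaneous values (e.g. open connections, running
# queries); system.events holds cumulative counters since server start.
print(query("SELECT metric, value FROM system.metrics WHERE metric LIKE '%Connection%' FORMAT TSV"))
print(query("SELECT event, value FROM system.events ORDER BY value DESC LIMIT 10 FORMAT TSV"))
```

The same queries can be wired into a scraper or a Grafana data source to build the kind of dashboards demonstrated in the webinar.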
Slides of our CoNEXT'19 presentation of "RSS++: load and state-aware receive side scaling", a technique to ensure good load balancing among the multiple cores of a server for networking applications.
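As context for what "load and state-aware receive side scaling" means, the toy Python sketch below (an illustration of the general idea, not the RSS++ algorithm from the paper) shows load-aware rebalancing: RSS hashes flows into indirection-table buckets, each bucket maps to a core, and rebalancing moves buckets away from overloaded cores.

```python
# Simplified, hypothetical illustration of load-aware RSS rebalancing
# (not the actual RSS++ algorithm): move the hottest bucket from the
# most loaded core to the least loaded core.
from collections import defaultdict

def rebalance(bucket_to_core, bucket_load):
    """bucket_to_core: dict bucket -> core; bucket_load: dict bucket -> packets/sec."""
    core_load = defaultdict(float)
    for bucket, core in bucket_to_core.items():
        core_load[core] += bucket_load.get(bucket, 0.0)

    busiest = max(core_load, key=core_load.get)
    idlest = min(core_load, key=core_load.get)
    if busiest == idlest:
        return bucket_to_core  # already balanced across the cores in use

    # Pick the hottest bucket currently served by the busiest core and move it.
    candidates = [b for b, c in bucket_to_core.items() if c == busiest]
    hottest = max(candidates, key=lambda b: bucket_load.get(b, 0.0))
    bucket_to_core[hottest] = idlest
    return bucket_to_core

# Example: 4 buckets, 2 cores, skewed load.
mapping = {0: 0, 1: 0, 2: 1, 3: 0}
load = {0: 900.0, 1: 100.0, 2: 50.0, 3: 300.0}
print(rebalance(mapping, load))  # bucket 0 migrates to core 1
```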
This document discusses disaggregating Ceph storage using NVMe over Fabrics (NVMeoF). It motivates using NVMeoF by showing the performance limitations of directly attaching multiple NVMe drives to individual compute nodes. It then proposes a design to leverage the full resources of a cluster by distributing NVMe drives across dedicated storage nodes and connecting them to compute nodes over a high performance fabric using NVMeoF and RDMA. Some initial Ceph performance measurements using this model show improved IOPS and latency compared to the direct attached approach. Future work could explore using SPDK and Linux kernel improvements to further optimize performance.
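For readers unfamiliar with NVMeoF, the hedged sketch below shows what attaching a remote namespace looks like from an initiator, using nvme-cli driven from Python; the target address and NQN are placeholders, and the exact commands are not taken from the presentation.

```python
# Hedged sketch: attach a remote NVMe namespace over an RDMA fabric using
# nvme-cli from Python. The target address and NQN below are placeholders;
# requires root privileges and the nvme-rdma kernel module.
import subprocess

TARGET_ADDR = "192.168.10.20"                      # placeholder storage-node address
TARGET_NQN = "nqn.2019-06.example:ceph-osd-ssd0"   # placeholder subsystem NQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Connect over RDMA (NVMeoF default service id 4420), then list namespaces.
run(["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420", "-n", TARGET_NQN])
run(["nvme", "list"])
# The new /dev/nvmeXnY block device then appears on the initiator just like a
# locally attached drive and can be consumed by the compute node.
```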
This slide deck was created to help you understand Open vSwitch more easily.
I tried to make it practical: if you follow the scenario step by step, you will pick up some working knowledge of OVS.
In this document I mainly use two commands, "ip" and "ovs-vsctl", to show what they can do.
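As a taste of the scenario, here is a hedged Python sketch that drives those two commands to build a bridge and plug a veth pair into it; the bridge and interface names are arbitrary examples, not taken from the slides.

```python
# Minimal sketch of the two commands the deck focuses on, driven from Python.
# Bridge and interface names are arbitrary examples; requires root and an
# installed Open vSwitch.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Create an OVS bridge with "ovs-vsctl", a veth pair with "ip", then plug one
# end of the pair into the bridge and bring both links up.
sh("ovs-vsctl add-br br-demo")
sh("ip link add veth-a type veth peer name veth-b")
sh("ovs-vsctl add-port br-demo veth-a")
sh("ip link set veth-a up")
sh("ip link set veth-b up")
sh("ovs-vsctl show")   # inspect the resulting bridge/port layout
```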
The document provides an overview of several data center network architectures: Monsoon, VL2, SEATTLE, PortLand, and TRILL. Monsoon proposes a large layer 2 domain with a Clos topology and uses MAC-in-MAC encapsulation and load balancing to improve scalability. VL2 also uses a Clos topology with flat addressing, load balancing, and an end host directory for address resolution. SEATTLE employs flat addressing, automated host discovery, and hash-based address resolution. PortLand uses a tree topology with encoded switch positions and a fabric manager for address mapping. TRILL standardizes encapsulation and IS-IS routing between routing bridges.
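To make SEATTLE's hash-based address resolution concrete, here is a toy Python sketch (an illustration of the idea, not the actual protocol): host MACs are hashed onto a ring of switch IDs, and the switch that owns the hash stores that host's location.

```python
# Toy illustration (not the actual SEATTLE protocol) of hash-based address
# resolution: each host MAC is hashed onto a ring of switch IDs, and the
# first switch clockwise from the hash stores that host's location.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ResolverRing:
    def __init__(self, switches):
        self.ring = sorted((h(s), s) for s in switches)

    def resolver_for(self, mac: str) -> str:
        """Return the switch responsible for storing this MAC's location."""
        keys = [k for k, _ in self.ring]
        idx = bisect.bisect(keys, h(mac)) % len(self.ring)
        return self.ring[idx][1]

ring = ResolverRing(["sw1", "sw2", "sw3", "sw4"])
print(ring.resolver_for("00:11:22:33:44:55"))  # e.g. "sw3"
```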
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto... (Ceph Community)
This document discusses Dell's support for CEPH storage solutions and provides an agenda for a CEPH Day event at Dell. Key points include:
- Dell is a certified reseller of Red Hat-Inktank CEPH support, services, and training.
- The agenda covers why Dell supports CEPH, hardware recommendations, best practices shared with CEPH colleagues, and a concept for research data storage that is seeking input.
- Recommended CEPH architectures, components, configurations, and considerations are discussed for planning and implementing a CEPH solution. Dell server hardware options that could be used are also presented.
This document provides troubleshooting guidance for issues with Ceph. It begins by suggesting identifying the problem domain as either performance, hang, crash, or unexpected behavior. For each problem, it recommends tools and techniques for further investigation such as debugging logs, profiling tools, and source code analysis. Debugging steps include establishing baselines, identifying implicated hosts or subsystems, increasing log verbosity, and tracing transactions through logs. The document emphasizes starting at the user end and working back towards Ceph to isolate issues.
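For the "trace transactions through logs" step, a small script can help correlate a request identifier across daemons. The Python sketch below is a generic, hypothetical helper: the log paths and the identifier format are examples, not Ceph specifics.

```python
# Hedged helper for tracing a request across several daemon logs: grep an
# identifier in each file and print the matches in timestamp order.
# Log paths and the identifier format are examples; adjust for your cluster.
import sys

LOG_FILES = ["/var/log/ceph/ceph-osd.0.log",
             "/var/log/ceph/ceph-osd.1.log",
             "/var/log/ceph/ceph-mon.a.log"]

def trace(request_id: str, files=LOG_FILES):
    hits = []
    for path in files:
        try:
            with open(path) as f:
                for line in f:
                    if request_id in line:
                        # Assume the line starts with a timestamp; use it as sort key.
                        hits.append((line[:26], path, line.rstrip()))
        except OSError as err:
            print(f"skipping {path}: {err}", file=sys.stderr)
    for _ts, path, line in sorted(hits):
        print(f"{path}: {line}")

if __name__ == "__main__":
    trace(sys.argv[1])   # e.g. trace("client.4123.0:567")
```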
oVirt and OpenStack look kind of similar from a distance. But they cater to different use-cases. That said, they do have some common needs. How can they work together? And when is it better to use one over the other?
YugaByte DB Internals - Storage Engine and Transactions (Yugabyte)
This document introduces YugaByte DB, a high-performance, distributed, transactional database. It is built to scale horizontally on commodity servers across data centers for mission-critical applications. YugaByte DB uses a transactional document store based on RocksDB, Raft-based replication for resilience, and automatic sharding and rebalancing. It supports ACID transactions across documents, provides APIs compatible with Cassandra and Redis, and is open source. The architecture is designed for high performance, strong consistency, and cloud-native deployment.
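Because the APIs are Cassandra-compatible, existing drivers can be pointed at YugaByte DB. The hedged Python sketch below uses the standard cassandra-driver against the YCQL port; the host, port, and schema are assumptions for a local single-node setup, not taken from the document.

```python
# Hedged sketch: YugaByte DB exposes a Cassandra-compatible API (YCQL), so the
# standard Python Cassandra driver can talk to it. Host, port, and schema are
# assumptions for a local single-node install.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"], port=9042)   # YCQL listens on the usual CQL port
session = cluster.connect()

session.execute("CREATE KEYSPACE IF NOT EXISTS demo")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users (
        id int PRIMARY KEY,
        name text
    )
""")
session.execute("INSERT INTO demo.users (id, name) VALUES (1, 'alice')")
for row in session.execute("SELECT id, name FROM demo.users"):
    print(row.id, row.name)

cluster.shutdown()
```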