Cacti is open-source software that uses RRDTool to graph and store time-series data from data sources such as SNMP. It keeps its configuration in a MySQL database and uses PHP to provide a front-end for creating graphs and templates and for managing users. Cacti supports unlimited graph items, auto-padding, custom data-gathering scripts, and SNMP, so it can track network traffic and system metrics over time through graphs. It also provides data source templates, host templates, and user management to scale monitoring of large networks.
Cacti Network Monitoring Networking Tools
1. Network Monitoring & Management
Using Cacti
Network Startup Resource Center
www.nsrc.org
These materials are licensed under the Creative Commons Attribution-NonCommercial 4.0 International license
(http://creativecommons.org/licenses/by-nc/4.0/)
3. Cacti: uses RRDtool and PHP, stores its
data in MySQL, and supports SNMP data
collection and graphing with RRDtool.
“Cacti is a complete frontend to RRDTool, it stores all of the necessary
information to create graphs and populate them with data in a MySQL
database. The frontend is completely PHP driven. Along with being able to
maintain Graphs, Data Sources, and Round Robin Archives in a database,
cacti handles the data gathering. There is also SNMP support for those
used to creating traffic graphs with MRTG.”
Introduction
4. • A tool to monitor, store and present
network and system/server statistics
• Designed around RRDTool with a special
emphasis on the graphical interface
• Almost all of Cacti's functionality can be
configured via the Web.
• You can find Cacti here:
http://www.cacti.net/
Introduction
5. • Round Robin Database for time series data storage
• Command line based
• From the author of MRTG
• Made to be faster and more flexible
• Includes CGI and Graphing tools, plus APIs
• Solves the Historical Trends and Simple Interface problems as
well as storage issues
Find RRDtool here: http://oss.oetiker.ch/rrdtool/
Getting RRDtool
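To make that workflow concrete, here is a minimal hand-run sketch of the create/update/fetch cycle that Cacti automates for you (the file and data-source names are illustrative, not names Cacti itself uses):

# Create a round-robin database for one 5-minute counter (names are illustrative)
rrdtool create traffic.rrd --step 300 \
  DS:octets:COUNTER:600:0:U \
  RRA:AVERAGE:0.5:1:600 RRA:AVERAGE:0.5:6:700

# Store a sample ("N" = now), then read the averaged values back
rrdtool update traffic.rrd N:1234567
rrdtool fetch traffic.rrd AVERAGE --start -1h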
7. 1. Cacti is written as a group of PHP scripts.
2. The key script is “poller.php”, which runs every 5 minutes
(by default). It resides in /usr/share/cacti/site.
3. For polling to work, a cron entry for poller.php must exist in /etc/cron.d/cacti, like this:
MAILTO=root
*/5 * * * * www-data php /usr/share/cacti/site/poller.php >/dev/null 2>/var/log/cacti/poller-error.log
4. Cacti uses RRDtool to create graphs for each device and
for the data collected about that device. You can adjust all
of this from within the Cacti web interface.
5. The RRD files are located in /var/lib/cacti/rra when cacti is
installed from packages.
General Description
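If graphs stay empty, a quick sanity check is to run one polling cycle by hand and look at the resulting RRD files. A sketch using the package paths above (the RRD file name shown is hypothetical; yours will match the data sources you have defined):

# Run one polling cycle as the web-server user (Debian/Ubuntu package paths)
sudo -u www-data php /usr/share/cacti/site/poller.php

# List the RRD files and inspect one of them (file name is hypothetical)
ls /var/lib/cacti/rra/
rrdtool info /var/lib/cacti/rra/gw_ws_nsrc_org_traffic_in_1.rrd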
8. You can measure Availability, Load, Errors and more all
with history.
– Cacti can display your router and switch interfaces and their traffic,
including all error traffic as well.
– Cacti can measure drive capacity, CPU load (network h/w and servers) and
much more. It can react to conditions and send notifications based on
specified ranges.
Graphics
– Allows you to use all the functionality of rrdgraph to define graphics and
automate how they are displayed.
– Allows you to organize information in hierarchical tree structures.
Data Sources
– Permits you to utilize all the functions of rrdcreate and rrdupdate including
defining several sources of information for each RRD file.
Advantages
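Under the hood each Cacti graph is an rrdgraph invocation; a minimal hand-built equivalent, reusing the illustrative RRD from the earlier sketch, looks like this:

# Draw the last day of the illustrative counter as a PNG
rrdtool graph traffic.png --start -1d --title "Interface traffic" \
  DEF:in=traffic.rrd:octets:AVERAGE \
  LINE2:in#0000FF:"octets per second"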
9. Data Collection
– Supports SNMP including the use of php-snmp or net-snmp
– Update data sources via SNMP or define scripts to capture required data
– cactid implements SNMP routines in C with multi-threading
Templates
– Create templates to reutilize graphics definitions, data and device sources
Cacti Plugin Architecture
– Extends Cacti functionality. Many, many plugins are available. Part of the
default Cacti installation in Ubuntu version 12 and above.
User Management
– Manage users locally or via LDAP
– Assign granular levels of authorization by user or groups of users.
Advantages (continued)
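Each SNMP data source boils down to queries like the ones below, which you can reproduce with the net-snmp command-line tools (the host and community shown are the workshop values used later in these slides; substitute your own):

# Fetch the interface octet counters Cacti would graph
snmpget -v2c -c NetManage gw.ws.nsrc.org IF-MIB::ifInOctets.1 IF-MIB::ifOutOctets.1

# Walk the interface descriptions to see what is available for graphing
snmpwalk -v2c -c NetManage gw.ws.nsrc.org IF-MIB::ifDescr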
10. • Configuring Interfaces via the web interface is tedious
• Use provided command-line scripts instead
• Upgrading between versions is difficult if Cacti was installed from source.
Advice:
For continuous use or large installations it is likely that
you will be using scripts and tools to automate the
configuration of Cacti.
Disadvantages
11. PART II
Before we install Cacti we demonstrate how to use the
interface to add and monitor some devices…
Steps to Add and Monitor Devices
12. Management -> Devices -> Add
Specify device attributes
– We’ll add an entry for our gateway router, gw.ws.nsrc.org*
*Actual device name may be different.
Adding a Device via the Web Interface
• Host Template: "ucd/net SNMP Host" is recommended for
servers, as it includes disk definitions.
• Choose SNMP version 2 for this workshop.
• For “Downed Device Detection” we recommend either
using Ping and SNMP, or just Ping.
• Use “NetManage” for the “SNMP Community” string.
SNMP access is a security issue:
- Version 2 is not encrypted
- Watch out for globally readable “public” communities
- Be careful about who can access r/w communities.
- Replace “xxxxxxx” with your local public r/o string
Add Devices (3)
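Before adding the device it is worth confirming, from the Cacti server itself, that the SNMP version and community string actually work; a quick check with the workshop values (substitute your own host and community):

# Verify SNMP v2c access before adding the device in Cacti
snmpstatus -v2c -c NetManage gw.ws.nsrc.org
# If this times out, check the device's SNMP configuration and any ACLs or firewalls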
15. For a router you may see a lot of potential network
interfaces that are detected by SNMP.
Your decision is whether to create graphs for all of these
or not. Generally the answer is "Yes" – why?
Add Devices (4)
• Choose the "Create graphs for this host" option.
• Under Graph Templates generally check the
top box that chooses all the available graphs to
be displayed.
• Press Create.
• You can change the default colors, but the
predefined definitions generally work well.
Create Graphics
19. You’ll see this screen later when you are creating graphics for hosts vs. routers
Create Graphics (4)
20. • Place the new device in its proper location in
your tree hierarchy.
• Building your display hierarchy is your decision.
Try drawing this out on paper first.
– Under Management -> Graph Trees
select the Default Tree hierarchy (or, create
one of your own).
View the Graphics
21. First, press “Add” if you want a new graphing tree:
Second, name your tree, choose the sorting order (the author likes
Natural Sorting), and press "create":
Graphics Tree
22. Third, add devices to your new tree:
Once you click “Add” you can add “Headers” (separators), graphs or hosts.
Now we'll add Hosts to our newly created graph tree:
Graphics Tree
23. • Our graphics tree just after the first two devices were added.
• So far, graphics are empty – the first data can
take up to 5 minutes to display.
• Cacti graphs are stored on disk and updated using RRDTool
via the poller.php script, which, by default, is run every five
minutes using cron.
Graphics Tree with Two Devices
26. • There are a number of popular Cacti plugins, such as:
- Settings
- thold
- PHP Weathermap
• A good place to start is http://cactiusers.net/ and Google.
• To send email to RT from Cacti via rt-mailgate you can
use the Cacti “settings” plugin:
http://docs.cacti.net/plugin:settings
• Automate device and graph creation using available
command-line scripts in /usr/share/cacti/cli, such as:
- add_device.php
- add_graphs.php
- add_tree.php
Next Steps
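As a sketch of what that automation can look like (option spellings vary between Cacti versions; run each script without arguments to see its exact usage on your installation):

# Add a device from the command line instead of the web interface
cd /usr/share/cacti/cli
sudo -u www-data php add_device.php --description="gw.ws.nsrc.org" \
    --ip="gw.ws.nsrc.org" --version=2 --community="NetManage"

# The other scripts follow the same pattern, e.g. listing hosts before adding graphs
sudo -u www-data php add_graphs.php --list-hosts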
27. • Cacti is very flexible due to its use of templates.
• Once you understand the concepts behind RRDTool,
then how Cacti works should be (more or less) intuitive.
• The visualization hierarchy of devices helps to organize
and locate new devices quickly.
• It is not easy to re-discover devices.
• Adding lots of devices requires automation. Software
such as Netdot, Netdisco, IPPlan, TIPP can help – as
can local scripts that update the Cacti back-end
MySQL database directly.
Conclusions
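As a rough illustration of that last point, a read-only query shows the host table such scripts manipulate (the database name, user, table and columns below are assumptions based on a packaged Cacti 0.8.x install and may differ on yours):

# Peek at the devices Cacti knows about, straight from MySQL (schema may differ)
mysql -u cacti -p cacti -e "SELECT id, description, hostname FROM host;"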