Tools & Integrations With InfluxDB

This paper covers the various tools available to InfluxDB developers to help them work the way they want when it comes to ingesting, storing, and querying their time series data. Using the InfluxDB platform, developers build their applications with less effort, less code, and less configuration with the use of a set of powerful APIs and tools.


Introduction

Developers are trying to gain a competitive edge by building applications that manage and analyze time
series data in order to ensure the performance and availability of every possible virtual and physical thing.
And it is no wonder: according to IDC, the amount of data produced worldwide is expected to grow
nearly fivefold by 2025, to 175 zettabytes per year, driven by the proliferation of IoT sensors, serverless
infrastructure, containerization, and microservices. Most of this data is time-stamped, generated at high
frequency and in great volumes, and requires rapid ingestion and real-time querying to extract maximum
value. Relational and NoSQL databases and monitoring services struggle to handle the scale of these
large time series workloads cost-effectively, and they do not offer the full stack of a database plus an
ecosystem of tools and services that solves the challenges of data integration, scale, operations, and
real-time analytics.

Storing the data is the easy part, but making it all work together is the challenge that developers face.
InfluxData successfully took this challenge head-on by creating InfluxDB, a purpose-built platform that
collects, stores, queries, and processes raw, high-precision, time-stamped data. Developers using the
InfluxDB platform build applications with less effort, less code, and less configuration, working in ways that
best suit their needs.

Developers can start from the UI or look under the hood to access raw code and the API. Client libraries
are available for Python, JavaScript, Go, and more. Developers can also streamline their workflow with
over 300 Telegraf plugins and with integrations for Grafana, Apache Superset, and data sources like
Google Bigtable.

This paper covers the various tools available to InfluxDB developers to help them work the way they want
when it comes to ingesting, storing, and querying their time series data.

Tools and integrations


Using the InfluxDB platform, developers build their applications with less effort, less code, and less
configuration, thanks to a set of powerful APIs and tools.

InfluxDB client libraries
InfluxDB client libraries are language-specific tools that integrate with the InfluxDB v3 API and can be used
to write data into InfluxDB as well as query the stored data. The following InfluxDB v3 client libraries are
available:
● Python
● Go
● C#
● Java
● JavaScript

The following InfluxDB v2 client libraries can write data to the InfluxDB 3.0 product suite. We are working
to update these libraries to be fully compatible with InfluxDB 3.0.
● Arduino
● Kotlin
● PHP
● Ruby
● Scala
● Swift

Check out the InfluxDB Client Library documentation.
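All of the client libraries write data using InfluxDB line protocol. As a rough sketch of what they produce, the snippet below builds a line protocol record by hand; the `to_line_protocol` helper and the measurement, tag, and field values are illustrative only — the real client libraries also handle escaping and type suffixes for you.

```python
# Build an InfluxDB line protocol record by hand. Client libraries emit
# this format when writing points:
#   measurement,tag_key=tag_value field_key=field_value timestamp
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    # NOTE: hypothetical helper for illustration; it does no escaping
    # and assumes simple tag/field values.
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "cpu",
    tags={"host": "server01", "region": "us-west"},
    fields={"usage_idle": 92.5},
    timestamp_ns=1700000000000000000,
)
print(line)
# cpu,host=server01,region=us-west usage_idle=92.5 1700000000000000000
```

With a v3 client such as the influxdb3-python library, a record like this would then be passed to the client's write call against a running instance.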

InfluxDB Command Line Interfaces (CLIs)


The influx and influxd command line interfaces are ways to interact with and manage InfluxDB.

influx CLI
The influx CLI interacts with and manages your InfluxDB instance. Write and query data, export data,
manage organizations and users, and more.

influxd CLI
The influxd CLI starts the InfluxDB OSS server and manages the InfluxDB storage engine. You can also
restore data, rebuild the time series index (TSI), assess the health of the underlying storage engine, and
more. This daemon works with InfluxDB OSS 1.x and 2.x versions.

InfluxDB API
Interact with InfluxDB 3.0 using a rich API for writing and querying data and more.

Postman with the InfluxDB API


Use Postman, a popular tool for exploring APIs, to interact with the InfluxDB API.
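Any HTTP client can exercise the same endpoints Postman does. As a minimal sketch, the snippet below constructs (but does not send) a write request against the v2-compatible /api/v2/write endpoint; the URL, organization, bucket, and token values are placeholders, not real credentials.

```python
import urllib.request

# Placeholder connection details -- substitute your own instance and token.
url = "http://localhost:8086/api/v2/write?org=my-org&bucket=my-bucket&precision=ns"
line = "cpu,host=server01 usage_idle=92.5 1700000000000000000"

# Build the POST request: the body is line protocol, and authentication
# uses the Token scheme expected by the InfluxDB HTTP API.
req = urllib.request.Request(
    url,
    data=line.encode("utf-8"),
    headers={
        "Authorization": "Token MY_TOKEN",
        "Content-Type": "text/plain; charset=utf-8",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

Sending the prepared request with `urllib.request.urlopen(req)` against a live instance would perform the actual write.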

Dashboards and InfluxDB

Grafana
Use Grafana to visualize data from your InfluxDB instance.

Apache Superset
Use Apache Superset to visualize data from an InfluxDB instance.

Telegraf
Telegraf is a plugin-driven server agent for collecting and sending metrics and events from databases,
systems, and IoT sensors. Telegraf is written in Go, compiles into a single binary with no external
dependencies, and has a minimal memory footprint.

The following plugin lists are extensive, but not exhaustive, as the number and breadth of Telegraf plugins
continue to grow.
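As an example of how input and output plugins fit together, a minimal telegraf.conf wiring the CPU input plugin to an InfluxDB v2-compatible output might look like the following sketch (the URL, token, organization, and bucket values are placeholders):

```toml
# Hypothetical minimal telegraf.conf: collect CPU metrics every 10s
# and write them to an InfluxDB bucket.
[agent]
  interval = "10s"

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "MY_TOKEN"
  organization = "my-org"
  bucket = "my-bucket"
```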

Input plugins
Telegraf uses input plugins with the InfluxData time series platform to collect metrics from systems,
services, or third-party APIs.
● AMQP Consumer — The AMQP Consumer Input Plugin provides a consumer for use with AMQP
0-9-1, a prominent implementation of this protocol being RabbitMQ.
● ActiveMQ — The ActiveMQ Input Plugin gathers queues, topics, and subscriber metrics using the
ActiveMQ Console API.
● Aerospike — The Aerospike Input Plugin queries Aerospike servers and gets node statistics and
statistics for all configured namespaces.
● Amazon CloudWatch Alarms — The Amazon CloudWatch Alarms Input Plugin pulls alarm
statistics from Amazon CloudWatch.
● Amazon CloudWatch Statistics — The Amazon CloudWatch Statistics Input Plugin pulls metric
statistics from Amazon CloudWatch.
● Amazon ECS — Amazon ECS Input Plugin (AWS Fargate compatible) uses the Amazon ECS v2
metadata and stats API endpoints to gather stats on running containers in a task. The Telegraf
container and the workload that Telegraf is inspecting must be run in the same task. This is similar
to (and reuses pieces of) the Docker Input Plugin, with some ECS-specific modifications for AWS
metadata and stats formats.
● Amazon Kinesis Consumer — The Amazon Kinesis Consumer Input Plugin reads from a Kinesis
data stream and creates metrics using one of the supported input data formats.
● Apache Aurora — The Aurora Input Plugin gathers metrics from Apache Aurora schedulers.
● Apache HTTP Server — The Apache HTTP Server Input Plugin collects server performance
information using the mod_status module of the Apache HTTP Server. Typically, the
mod_status module is configured to expose a page at the /server-status?auto location of
the Apache server. The ExtendedStatus option must be enabled in order to collect all available
fields. For information about how to configure your server reference, see the module
documentation.
● Apache Kafka Consumer — The Apache Kafka Consumer Input Plugin polls a specified Kafka
topic and adds messages to InfluxDB. Messages are expected in the line protocol format.
Consumer Group is used to talk to the Kafka cluster, so multiple instances of Telegraf can read
from the same topic in parallel.
● Apache Mesos — The Apache Mesos Input Plugin gathers metrics from Mesos.
● Apache Solr — The Apache Solr Input Plugin collects stats using the MBean Request Handler.
● Apache Tomcat — The Apache Tomcat Input Plugin collects statistics available from the Apache
Tomcat manager status page (http://<host>/manager/status/all?XML=true). Using XML=true returns
XML data. See the Apache Tomcat documentation for details on these statistics.
● Apache Zipkin — The Apache Zipkin Input Plugin implements the Zipkin HTTP server to gather
trace and timing data needed to troubleshoot latency problems in microservice architectures. This
plugin is experimental. Its data schema may be subject to change based on its main usage cases
and the evolution of the OpenTracing standard.
● Apache Zookeeper — The Apache Zookeeper Input Plugin collects variables output from the
mntr command Zookeeper Admin.
● Apcupsd — The Apcupsd Input Plugin reads data from an apcupsd daemon over its NIS network
protocol.
● Arista LANZ Consumer — The Arista LANZ Consumer Input Plugin provides a consumer for use
with Arista Networks’ Latency Analyzer (LANZ). Metrics are read from a stream of data via TCP
through port 50001 on the switch’s management IP. Data is in Protobuffers format. For more
information, see Arista LANZ.
● Azure Storage Queue — The Azure Storage Queue Input Plugin gathers sizes of Azure Storage
Queues.
● Bcache — The Bcache Input Plugin gets bcache statistics from the stats_total directory and
dirty_data file.
● Beat — The Beat Input Plugin collects metrics from the given Elastic Beat instances.
● Beanstalkd — The Beanstalkd Input Plugin collects server stats as well as tube stats (reported by
stats and stats-tube commands respectively).
● BIND 9 Nameserver Statistics — This plugin decodes the JSON or XML statistics provided by
BIND 9 nameservers.
● Bond — The Bond Input Plugin collects the status of network bond interfaces, the status of the
bond's slave interfaces, and the failure count of the bond's slave interfaces. The plugin collects
these metrics from /proc/net/bonding/* files.
● Burrow — The Burrow Input Plugin collects Apache Kafka topic, consumer, and partition status
using the Burrow HTTP Endpoint.
● Cassandra — Deprecated in Telegraf 1.7.0 in favor of the Jolokia2 Input Plugin. See
Jolokia2/Cassandra configurations.
● Ceph Storage — The Ceph Storage Input Plugin collects performance metrics from the MON and
OSD nodes in a Ceph storage cluster.
● CGroup — The CGroup Input Plugin captures specific statistics per cgroup.
● Chrony — The Chrony Input Plugin gets standard chrony metrics and requires the chronyc executable.
● Cisco GNMI Telemetry — The inputs.cisco_telemetry_gnmi plugin was renamed to
inputs.gnmi in Telegraf 1.15.0 to better reflect its general support for gNMI devices. See the gNMI
Plugin. The Cisco GNMI Telemetry Input Plugin consumes telemetry data similar to the gNMI
specification. This gRPC-based protocol can utilize TLS for authentication and encryption. This
plugin has been developed to support gNMI telemetry as produced by Cisco IOS XR (64-bit)
version 6.5.1 and later.
● Cisco Model-Driven Telemetry (MDT) — The Cisco Model-Driven Telemetry (MDT) Input Plugin
consumes telemetry data from Cisco IOS XR, IOS XE, and NX-OS platforms. It supports TCP and
gRPC dial-out transports. The gRPC-based transport can utilize TLS for authentication and encryption.
Telemetry data is expected to be GPB-KV (self-describing-gpb) encoded.
● ClickHouse — The ClickHouse Input Plugin gathers statistics from a ClickHouse server, an open
source column-oriented database management system that lets you generate analytical data
reports in real time.
● Conntrack — The Conntrack Input Plugin collects stats from Netfilter’s conntrack-tools. The
conntrack-tools provide a mechanism for tracking various aspects of network connections as
they are processed by netfilter. At runtime, conntrack exposes many of those connection statistics
within /proc/sys/net. Depending on your kernel version, these files can be found in either
/proc/sys/net/ipv4/netfilter or /proc/sys/net/netfilter and will be prefixed with
either ip_ or nf_. This plugin reads the files specified in its configuration and publishes each one
as a field, with the prefix normalized to ip_.
● Consul — The Consul Input Plugin collects statistics about all health checks registered in
Consul, using the Consul API to query the data. It does not report telemetry data, but Consul
can already report those stats using the StatsD protocol if needed.
● Couchbase — The Couchbase Input Plugin reads per-node and per-bucket metrics from
Couchbase.
● CouchDB — The CouchDB Input Plugin gathers metrics of CouchDB using the _stats endpoint.
● CPU — The CPU Input Plugin gathers metrics about CPU usage.
● CS:GO — The CS:GO Input Plugin gathers metrics from Counter-Strike: Global Offensive servers.
● Disk — The Disk Input Plugin gathers metrics about disk usage by mount point.
● DiskIO — The DiskIO Input Plugin gathers metrics about disk IO by device.
● Directory Monitoring — The Directory Monitoring Input Plugin monitors a single directory and
takes in each file placed in the directory. The plugin gathers all files in the directory at a
configurable interval, and parses the ones that haven’t been picked up yet.
● Disque — The Disque Input Plugin gathers metrics from one or more Disque servers.
● DMCache — The DMCache Input Plugin provides a native collection for dmsetup-based statistics
for dm-cache.
● DNS Query — The DNS Query Input Plugin gathers DNS query times in milliseconds, similar to dig.
● Docker — The Docker Input Plugin uses the Docker Engine API to gather metrics on running
Docker containers. The plugin uses the Official Docker Client to gather stats from the Engine API.
● Docker Log — The Docker Log Input Plugin uses the Docker Engine API to collect logs from
running Docker containers. The plugin uses the Official Docker Client to gather logs from the
Engine API. This plugin works only for containers with the local, json-file, or journald logging
drivers.
● Dovecot — The Dovecot Input Plugin uses the dovecot Stats protocol to gather metrics on
configured domains. For more information, see the Dovecot documentation.
● Elasticsearch — The Elasticsearch Input Plugin queries endpoints to obtain node and optionally
cluster-health or cluster-stats metrics.
● Ethtool — The Ethtool Plugin gathers ethernet device statistics. The network device and driver
determine what fields are gathered.
● Event Hub Consumer — The Event Hub Consumer Input Plugin provides a consumer for use
with Azure Event Hubs and Azure IoT Hub.
● Exec — The Exec Input Plugin parses supported Telegraf input data formats (line protocol, JSON,
Graphite, Value, Nagios, Collectd, and Dropwizard) into metrics. Each Telegraf metric includes the
measurement name, tags, fields, and timestamp.
● Execd — The Execd Input Plugin runs an external program as a daemon. The program must output
metrics in an accepted Telegraf input data format on its standard output. Configure the signal
option to send a signal to the daemon on each collection interval. Program output on standard
error is mirrored to the Telegraf log.
● Fail2ban — The Fail2ban Input Plugin gathers the count of failed and banned IP addresses using
fail2ban.
● Fibaro — The Fibaro Input Plugin makes HTTP calls to the Fibaro controller API to gather values of
hooked devices. Those values could be true (1) or false (0) for switches, percentage for dimmers,
temperature, etc.
● File — The File Input Plugin updates a list of files every interval and parses the contents using the
selected input data format. Files will always be read in their entirety. If you wish to tail or follow a
file, then use the Tail Input Plugin. To parse metrics from multiple files that are formatted in one of
the supported input data formats, use the Multifile Input Plugin.
● Filecount — The Filecount Input Plugin reports the number and total size of files in directories
that match certain criteria.
● Filestat — The Filestat Input Plugin gathers metrics about file existence, size, and other stats.
● Fireboard — The Fireboard Input Plugin gathers real-time temperature data from Fireboard
thermometers. To use this input plugin, sign up to use the Fireboard REST API.
● Fluentd — The Fluentd Input Plugin gathers Fluentd server metrics from the plugin endpoint
provided by in_monitor plugin. This plugin understands data provided by /api/plugin.json
resource (/api/config.json is not covered).
● GitHub — The GitHub Plugin gathers repository information from GitHub-hosted repositories.
● gNMI — The gNMI Plugin consumes telemetry data based on the gNMI Subscribe method. The
plugin supports TLS for authentication and encryption. This input plugin is vendor-agnostic and is
supported on any platform that supports the gNMI spec. For Cisco devices: The gNMI Plugin is
optimized to support gNMI telemetry as produced by Cisco IOS XR (64-bit) version 6.5.1, Cisco
NX-OS 9.3 and Cisco IOS XE 16.12 and later.
● Google Cloud PubSub — The Google Cloud PubSub Input Plugin ingests metrics from Google
Cloud PubSub and creates metrics using one of the supported input data formats.
● Google Cloud PubSub Push — The Google Cloud PubSub Push (cloud_pubsub_push) Input
Plugin listens for messages sent via HTTP POST requests from Google Cloud PubSub. The
plugin expects messages in Google's Pub/Sub JSON format only. The intent of the plugin is to
allow Telegraf to serve as an endpoint of the Google Pub/Sub 'Push' service. Google's PubSub
service will only send over HTTPS/TLS, so this plugin must sit behind a valid proxy or be
configured to use TLS.
● Graylog — The Graylog Input Plugin can collect data from remote Graylog service URLs. This
plugin currently supports two types of endpoints: multiple (e.g.,
http://[graylog-server-ip]:12900/system/metrics/multiple) and
namespace (e.g., http://[graylog-server-ip]:12900/system/metrics/namespace/{namespace}).
● HAProxy — The HAProxy Input Plugin gathers metrics directly from any running HAProxy
instance, using either the CSV generated by the HAProxy status page or the admin sockets.
● Hddtemp — The Hddtemp Input Plugin reads data from hddtemp daemons.
● HTTP — The HTTP Input Plugin collects metrics from one or more HTTP (or HTTPS) endpoints. The
endpoint should have metrics formatted in one of the supported input data formats. Each data
format has its own unique set of configuration options which can be added to the input
configuration.
● HTTP JSON — Deprecated in Telegraf 1.6.0. Use the HTTP Input Plugin. The HTTP JSON Input
Plugin collects data from HTTP URLs which respond with JSON. It flattens the JSON and finds all
numeric values, treating them as floats.
● HTTP Listener — The http_listener Input Plugin was renamed to influxdb_listener. The new name
better describes the intended use of the plugin as an InfluxDB relay. For general-purpose transfer of
metrics in any format via HTTP, use http_listener_v2 instead.
● HTTP Listener v2 — The HTTP Listener v2 Input Plugin listens for metrics sent via HTTP. Metrics
may be sent in any supported Telegraf input data format. Note that the plugin previously known as
http_listener has been renamed influxdb_listener. To use Telegraf as a proxy/relay for InfluxDB, we
recommend using influxdb_listener.
● HTTP Response — The HTTP Response Input Plugin gathers metrics for HTTP responses. The
measurements and fields include response_time, http_response_code, and result_type.
Tags for measurements include server and method.
● Icinga 2 — The Icinga 2 Input Plugin gathers status on running services and hosts using the Icinga
2 API.
● InfiniBand — The InfiniBand Input Plugin gathers statistics for all InfiniBand devices and ports on
the system. Counters are stored in /sys/class/infiniband/<dev>/port/<port>/counters/.
● InfluxDB v1.x — The InfluxDB v1.x Input Plugin gathers metrics from the exposed InfluxDB v1.x
/debug/vars endpoint. Using Telegraf to extract these metrics to create a “monitor of monitors” is a
best practice and allows you to reduce the overhead associated with capturing and storing these
metrics locally within the _internal database for production deployments.
● InfluxDB v2 — InfluxDB 2.x exposes its metrics using the Prometheus Exposition Format — there
is no InfluxDB v2 input plugin.
● InfluxDB Listener — The InfluxDB Listener Input Plugin listens for requests sent according to the
InfluxDB HTTP API. The intent of the plugin is to allow Telegraf to serve as a proxy, or router, for the
HTTP /write endpoint of the InfluxDB HTTP API.
● InfluxDB v2 Listener — The InfluxDB v2 Listener Input Plugin listens for requests sent according
to the InfluxDB HTTP API. The intent of the plugin is to allow Telegraf to serve as a proxy, or router,
for the HTTP /api/v2/write endpoint of the InfluxDB HTTP API.
● Intel Powerstat — The Intel Powerstat Input Plugin collects information provided by the
monitoring features of Intel Powerstat.
● Intel RDT — The Intel RDT Input Plugin collects information provided by the monitoring features of
Intel Resource Director Technology (RDT).
● Interrupts — The Interrupts Input Plugin gathers metrics about IRQs, including interrupts (from
/proc/interrupts) and soft_interrupts (from /proc/softirqs).
● IPMI Sensor — The IPMI Sensor Input Plugin queries the local machine or remote host sensor
statistics using the ipmitool utility.
● Ipset — The Ipset Input Plugin gathers packets and bytes counters from Linux ipset. It uses the
output of the command ipset save. Ipsets created without the counters option are ignored.
● IPtables — The IPtables Input Plugin gathers packets and bytes counters for rules within a set of
table and chain from the Linux iptables firewall.
● IPVS — The IPVS Input Plugin uses the Linux kernel netlink socket interface to gather metrics
about IPVS virtual and real servers.
● Jenkins — The Jenkins Input Plugin gathers information about the nodes and jobs running in a
Jenkins instance. This plugin does not require a plugin on Jenkins and makes use of Jenkins API
to retrieve all the information needed.
● Jolokia — Deprecated in Telegraf 1.5.0. Use the Jolokia2 Input Plugin.
● Jolokia2 Agent — The Jolokia2 Agent Input Plugin reads JMX metrics from one or more Jolokia
agent REST endpoints using the JSON-over-HTTP protocol.
● Jolokia2 Proxy — The Jolokia2 Proxy Input Plugin reads JMX metrics from one or more targets
by interacting with a Jolokia proxy REST endpoint using the Jolokia JSON-over-HTTP protocol.
● JTI OpenConfig Telemetry — The JTI OpenConfig Telemetry Input Plugin reads Juniper
Networks implementation of OpenConfig telemetry data from listed sensors using the Junos
Telemetry Interface. Refer to openconfig.net for more details about OpenConfig and Junos
Telemetry Interface (JTI).
● Kapacitor — The Kapacitor Input Plugin collects metrics from the given Kapacitor instances.
● Kernel — The Kernel Input Plugin gathers kernel statistics from /proc/stat.
● Kernel VMStat — The Kernel VMStat Input Plugin gathers kernel statistics from /proc/vmstat.
● Kibana — The Kibana Input Plugin queries the Kibana status API to obtain the health status of
Kibana and some useful metrics.
● Kubernetes — The Kubernetes Input Plugin talks to the kubelet API using the /stats/summary
endpoint to gather metrics about the running pods and containers for a single host. It is assumed
that this plugin is running as part of a daemonset within a Kubernetes installation. This means that
Telegraf is running on every node within the cluster. Therefore, you should configure this plugin to
talk to its locally running kubelet. The Kubernetes Input Plugin is experimental.
● Kubernetes Inventory — The Kubernetes Inventory Input Plugin generates metrics derived from
the state of the following Kubernetes resources:
○ daemonsets
○ deployments
○ nodes
○ persistentvolumes
○ persistentvolumeclaims
○ pods (containers)
○ statefulsets
● LeoFS — The LeoFS Input Plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage
using SNMP. See System monitoring in the LeoFS documentation for more information.
● Linux Sysctl FS — The Linux Sysctl FS Input Plugin provides Linux system level file (sysctl fs)
metrics. The documentation on these fields can be found here.
● Logparser — The Logparser Input Plugin streams and parses the given log files. Currently, it has
the capability of parsing “grok” patterns from log files, which also supports regular expression
(regex) patterns.
● Logstash — The Logstash Input Plugin reads metrics exposed by the Logstash Monitoring API.
The plugin supports Logstash 5 and later.
● Lustre2 — Lustre Jobstats allows RPCs to be tagged with a value, such as a job's ID, enabling
per-job statistics. The Lustre2 Input Plugin collects statistics and tags the data with the jobid.
● Mailchimp — The Mailchimp Input Plugin gathers metrics from the /3.0/reports MailChimp API.
● MarkLogic — The MarkLogic Input Plugin gathers health status metrics from one or more
MarkLogic hosts.
● Mcrouter — The Mcrouter Input Plugin gathers statistics data from a mcrouter instance. Mcrouter
is a memcached protocol router, developed and maintained by Facebook, for scaling memcached
deployments. It’s a core component of cache infrastructure at Facebook and Instagram where
mcrouter handles almost 5 billion requests per second at peak.
● Mem — The Mem Input Plugin collects system memory metrics. For a more complete explanation
of the difference between used and actual_used RAM, see Linux ate my ram.
● Memcached — The Memcached Input Plugin gathers statistics data from a Memcached server.
● Mesosphere DC/OS — The Mesosphere DC/OS Input Plugin gathers metrics from a DC/OS
cluster’s metrics component.
● Microsoft SQL Server — The Microsoft SQL Server Input Plugin provides metrics for your
Microsoft SQL Server instance. It currently works with SQL Server versions 2008+. Recorded
metrics are lightweight and use Dynamic Management Views supplied by SQL Server.
● Minecraft — The Minecraft Input Plugin uses the RCON protocol to collect statistics from a
scoreboard on a Minecraft server.
● Modbus — The Modbus Input Plugin collects discrete_inputs, coils, input_registers and
holding_registers via Modbus TCP or Modbus RTU/ASCII.
● MongoDB — The MongoDB Input Plugin collects MongoDB stats exposed by serverStatus and
a few more commands, and creates a single measurement containing the values.
● Monit — The Monit Input Plugin gathers metrics and status information about local processes,
remote hosts, files, file systems, directories, and network interfaces managed and watched by
Monit. To use this plugin, enable the HTTPD TCP port in Monit.
● MQTT Consumer — The MQTT Consumer Input Plugin reads from specified MQTT topics and
adds messages to InfluxDB. Messages are in the Telegraf input data formats.
● Multifile — The Multifile Input Plugin allows Telegraf to combine data from multiple files into a
single metric, creating one field or tag per file. This is often useful for creating custom metrics from
the /sys or /proc filesystems. To parse metrics from a single file formatted in one of the supported
input data formats, use the File Input Plugin.
● MySQL — The MySQL Input Plugin gathers the statistics data from MySQL, MariaDB, and Percona
servers.
● NATS Consumer — The NATS Consumer Input Plugin reads from specified NATS subjects and
adds messages to InfluxDB. Messages are expected in the Telegraf input data formats. A Queue
Group is used when subscribing to subjects so multiple instances of Telegraf can read from a NATS
cluster in parallel.
● NATS Server Monitoring — The NATS Server Monitoring Input Plugin gathers metrics when
using the NATS Server monitoring server.
● Neptune Apex — The Neptune Apex Input Plugin collects real-time data from the Apex status.xml
page. The Neptune Apex controller family allows an aquarium hobbyist to monitor and control their
tanks based on various probes. The data is taken directly from the /cgi-bin/status.xml at the
interval specified in the telegraf.conf configuration file.
● Net — The Net Input Plugin gathers metrics about network interface usage (Linux only).
● Netstat — The Netstat Input Plugin gathers TCP metrics such as established, time-wait and
sockets counts by using lsof.
● Network Response — The Network Response Input Plugin tests UDP and TCP connection
response time. It can also check response text.
● NFS — The NFS Input Plugin collects data from an NFS Client per-mount statistics
(/proc/self/mountstats). By default, the plugin collects only a limited number of general system-level
metrics.
● NGINX — The NGINX Input Plugin reads NGINX basic status information
(ngx_http_stub_status_module).
● NGINX VTS — The NGINX VTS Input Plugin gathers NGINX status using the external virtual host
traffic status module (https://github.com/vozlt/nginx-module-vts). This NGINX module provides
access to virtual host status information, including the current status of servers, upstreams, and
caches. This is similar to the live activity monitoring of NGINX Plus.
● NGINX Plus — The NGINX Plus Input Plugin is for NGINX Plus, the commercial version of the open
source web server NGINX. To use this plugin you will need a license.
● NGINX Plus API — The NGINX Plus API Input Plugin gathers advanced status information for
NGINX Plus servers.
● NGINX Stream STS — The NGINX Stream STS Input Plugin gathers NGINX status using external
virtual host traffic status.
● NGINX Upstream Check — The NGINX Upstream Check Input Plugin reads the status output of
the nginx_upstream_check. This module can periodically check the NGINX upstream servers
using the configured request and interval to determine if the server is still available. If checks
fail, the server is marked as down and will not receive any requests until the check passes, at
which point it is marked as up again. The status page displays the current status of all
upstreams and servers, as well as the number of failed and successful checks. This information can
be exported in JSON format and parsed by this input.
● NSD — The NSD Input Plugin collects metrics from NSD, an authoritative DNS name server.
● NSQ — The NSQ Input Plugin collects metrics from NSQD API endpoints.
● NSQ Consumer — The NSQ Consumer Input Plugin polls a specified NSQD topic and adds
messages to InfluxDB. This plugin allows a message to be in any of the supported data_format
types.
● Nstat — The Nstat Input Plugin collects network metrics from /proc/net/netstat,
/proc/net/snmp, and /proc/net/snmp6 files.
● NTPq — The NTPq Input Plugin gets standard NTP query metrics and requires the ntpq executable.
● NVIDIA SMI — The NVIDIA SMI Input Plugin uses a query on the NVIDIA System Management
Interface (nvidia-smi) binary to pull GPU stats including memory and GPU usage, temp, and others.
● Octoprint — The Octoprint Input Plugin gathers metrics from the Octoprint API.
● OPC UA — The OPC UA Plugin gathers metrics from client devices using the OPC Foundation’s
Unified Architecture (UA) machine-to-machine communication protocol for industrial automation.
● OpenLDAP — The OpenLDAP Input Plugin gathers metrics from OpenLDAP’s cn=Monitor
backend.
● OpenNTPD — The OpenNTPD Input Plugin gathers standard Network Time Protocol (NTP) query
metrics from OpenNTPD using the ntpctl command.
● OpenSMTPD — The OpenSMTPD Input Plugin gathers stats from OpenSMTPD, a free
implementation of the server-side SMTP protocol.
● OpenWeatherMap — Collect current weather and forecast data from OpenWeatherMap.
● PF — The PF Input Plugin gathers information from the FreeBSD/OpenBSD pf firewall. Currently, it
can retrieve information about the state table: the number of current entries in the table, and
counters for the number of searches, inserts, and removals to the table. The PF Input Plugin
retrieves this information by invoking the pfstat command.
● PgBouncer — The PgBouncer Input Plugin provides metrics for your PgBouncer load balancer.
For information about the metrics, see the PgBouncer documentation.
● Phusion Passenger — The Phusion Passenger Input Plugin gets Phusion Passenger statistics
using their command line utility passenger-status.
● PHP-FPM — The PHP-FPM Input Plugin gets phpfpm statistics using either the HTTP status page
or the fpm socket.
● Ping — The Ping Input Plugin measures round-trip times for ping commands, response times, and
other packet statistics.
● Plex Webhook — The Plex Webhook Input Plugin listens for events from Plex Media Server
Webhooks.
● Postfix — The Postfix Input Plugin reports metrics on the postfix queues. For each of the active,
hold, incoming, maildrop, and deferred queues, it will report the queue length (number of items),
size (bytes used by items), and age (age of oldest item in seconds).
● PostgreSQL — The PostgreSQL Input Plugin provides metrics for your PostgreSQL database. It
currently works with PostgreSQL versions 8.1+. It uses data from the built-in pg_stat_database
and pg_stat_bgwriter views. The metrics recorded depend on your version of PostgreSQL.
● PostgreSQL Extensible — This PostgreSQL Extensible Input Plugin provides metrics for your
Postgres database. It has been designed to parse SQL queries in the plugin section of telegraf.conf
files.
● PowerDNS — The PowerDNS Input Plugin gathers metrics about PowerDNS using UNIX sockets.
● PowerDNS Recursor — The PowerDNS Recursor Input Plugin gathers metrics about PowerDNS
Recursor using UNIX sockets.
● Processes — The Processes Input Plugin gathers info about the total number of processes and
groups them by status (zombie, sleeping, running, etc.). On Linux, this plugin requires access to
procfs (/proc); on other operating systems, it requires access to execute ps.
● Procstat — The Procstat Input Plugin monitors system resource usage of individual processes
using their /proc data. Processes can be specified either by pid file, by executable name, by
command line pattern matching, by username, by systemd unit name, or by cgroup name/path (in
this order of priority). This plugin uses pgrep when an executable name is provided to obtain the
pid. The Procstat Input Plugin transmits IO, memory, cpu, file descriptor-related measurements for
every process specified. A prefix can be set to isolate individual process specific measurements.
The Procstat Input Plugin will tag processes according to how they are specified in the
configuration. If a pid file is used, a “pidfile” tag will be generated. On the other hand, if an
executable is used, an “exe” tag will be generated.
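As an illustrative sketch, a procstat section in telegraf.conf might select a process by executable name like this (the nginx names are placeholders; exact option defaults may vary by Telegraf version):

```toml
[[inputs.procstat]]
  ## Select processes by executable name; pgrep is used to resolve PIDs.
  exe = "nginx"
  ## Other selectors, in order of priority (uncomment one instead of exe):
  # pid_file = "/var/run/nginx.pid"
  # pattern = "nginx"              # command line pattern matching
  # user = "www-data"
  # systemd_unit = "nginx.service"
```

Because exe is used here, emitted metrics would carry an "exe" tag; with pid_file they would carry a "pidfile" tag instead.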
● Prometheus Format — The Prometheus Format Input Plugin gathers metrics from HTTP servers
exposing metrics in Prometheus format.
● Proxmox — The Proxmox Plugin gathers metrics about containers and VMs using the Proxmox
API.
● Puppet Agent — The Puppet Agent Input Plugin collects variables output by Puppet Agent runs
from the last_run_summary.yaml file, usually located in /var/lib/puppet/state/.
● RabbitMQ — The RabbitMQ Input Plugin reads metrics from RabbitMQ servers via the
Management Plugin.
● Raindrops Middleware — The Raindrops Middleware Input Plugin reads from the specified
Raindrops middleware URI and adds the statistics to InfluxDB.
● RAS — The RAS Input Plugin gathers and counts errors provided by RASDaemon, a RAS (reliability,
availability, and serviceability) logging tool.
● RavenDB — The RavenDB Input Plugin reads metrics from RavenDB.
● Redfish — The Redfish Input Plugin gathers metrics and status information of hardware servers for
which DMTF’s Redfish is enabled.
● Redis — The Redis Input Plugin gathers the results of the INFO Redis command. There are two
separate measurements: redis and redis_keyspace. The latter is used for gathering
database-related statistics. Additionally, the plugin calculates the hit/miss ratio
(keyspace_hitrate) and the elapsed time since the last RDB save (rdb_last_save_time_elapsed).
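A minimal configuration sketch (the server address is a placeholder):

```toml
[[inputs.redis]]
  ## Servers to poll with the INFO command.
  servers = ["tcp://localhost:6379"]
  ## Optional authentication, if the server requires it.
  # password = "example-password"
```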
● RethinkDB — The RethinkDB Input Plugin works with RethinkDB 2.3.5+ databases, which require
username/password authorization and handshake protocol v1.0.
● Riak — The Riak Input Plugin gathers metrics from one or more Riak instances.
● Riemann Listener — The Riemann Listener Input Plugin listens for messages from Riemann
clients using Riemann-Protobuff format.
● Salesforce — The Salesforce Input Plugin gathers metrics about the limits in your Salesforce
organization and the remaining usage. It fetches its data from the limits endpoint of the Salesforce
REST API.
● Sensors — The Sensors Input Plugin collects sensor metrics with the sensors executable from the
lm-sensor package.
● SFlow — The SFlow Input Plugin provides support for acting as an SFlow V5 collector in
accordance with the sflow.org specification.
● SMCIPMITool — The SMCIPMITool Input Plugin parses the output of SMCIPMITool.
● S.M.A.R.T. — The SMART Input Plugin gets metrics using the command line utility smartctl for
SMART (Self-Monitoring, Analysis and Reporting Technology) storage devices. SMART is a
monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs),
which include most modern ATA/SATA, SCSI/SAS and NVMe disks. The plugin detects and reports
on various indicators of drive reliability, with the intent of enabling the anticipation of hardware
failures. See smartmontools.
● SNMP — The SNMP Input Plugin gathers metrics from SNMP agents.
● SNMP Legacy — The SNMP Legacy Input Plugin gathers metrics from SNMP agents. Deprecated
in Telegraf 1.0.0. Use the SNMP Input Plugin.
● SNMP Trap — The SNMP Trap Plugin receives SNMP notifications (traps and inform requests).
Notifications are received over UDP on a configurable port. Resolve OIDs to strings using system
MIB files (just like with the SNMP Input Plugin).
● Socket Listener — The Socket Listener Input Plugin listens for messages from streaming (TCP,
UNIX) or datagram (UDP, unixgram) protocols. Messages are expected in the Telegraf Input Data
Formats.
● Stackdriver — The Stackdriver Input Plugin gathers metrics from the Stackdriver Monitoring API.
This plugin accesses APIs that are chargeable. You may incur costs.
● StatsD — The StatsD Input Plugin is a special type of plugin that runs a background statsd
listener service while Telegraf is running. StatsD messages are formatted as described in the
original Etsy statsd implementation.
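A sketch of the listener configuration (port and percentile choices are illustrative):

```toml
[[inputs.statsd]]
  ## Protocol and address for the background listener service.
  protocol = "udp"
  service_address = ":8125"
  ## Percentiles to calculate for timing and histogram stats.
  percentiles = [90.0, 95.0, 99.0]
```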
● Suricata — The Suricata Input Plugin reports internal performance counters of the Suricata
IDS/IPS engine, such as captured traffic volume, memory usage, uptime, flow counters, and more. It
provides a socket for the Suricata log output to write JSON output to and processes the incoming
data to fit Telegraf’s format.
● Swap — Supports: Linux only. The Swap Input Plugin gathers metrics about swap memory usage.
For more information about Linux swap spaces, see All about Linux swap space.
● Synproxy — The Synproxy Plugin gathers synproxy metrics. Synproxy is a Linux netfilter module
used for SYN attack mitigation.
● Syslog — The Syslog Input Plugin listens for syslog messages transmitted over UDP or TCP.
Syslog messages should be formatted according to RFC 5424.
● Sysstat — The Sysstat Input Plugin collects sysstat system metrics with the sysstat collector utility
sadc and parses the created binary data file with the sadf utility.
● System — The System Input Plugin gathers general stats on system load, uptime, and number of
users logged in. It is basically equivalent to the UNIX uptime command.
● SystemD Timings — The SystemD Timings Plugin collects systemd boot timing metrics.
● Systemd Units — The Systemd Units Plugin gathers systemd unit status metrics on Linux. It relies
on systemctl list-units --all --type=service to collect data on service status. Results are
tagged with the unit name and provide enumerated fields for the loaded, active, and running states,
indicating unit health. This plugin can gather other unit types as well. See systemctl list-units
--all --type help for possible options. This plugin is related to the Windows Services Input
Plugin, which fulfills the same purpose on Windows.
● Tail — The Tail Input Plugin “tails” a log file and parses each log message.
● TCP Listener — Deprecated in Telegraf 1.3.0. Use the Socket Listener Input Plugin.
● Teamspeak 3 — The Teamspeak 3 Input Plugin uses the Teamspeak 3 ServerQuery interface of
the Teamspeak server to collect statistics of one or more virtual servers.
● Telegraf v1.x — The Telegraf v1.x Input Plugin collects metrics about the Telegraf v1.x agent itself.
Note that some metrics are aggregates across all instances of one type of plugin.
● Temp — The Temp Input Plugin collects temperature data from sensors.
● Tengine Web Server — The Tengine Web Server Input Plugin gathers status metrics from the
Tengine Web Server using the Reqstat module.
● Trig — The Trig Input Plugin inserts sine and cosine waves for demonstration purposes.
● Twemproxy — The Twemproxy Input Plugin gathers data from Twemproxy instances, processing
Twemproxy server statistics, pool data, and backend server (Redis/Memcached) statistics.
● UDP Listener — Deprecated in Telegraf 1.3.0. Use the Socket Listener Input Plugin.
● Unbound — The Unbound Input Plugin gathers statistics from Unbound, a validating, recursive,
and caching DNS resolver.
● uWSGI — The uWSGI Input Plugin gathers metrics about uWSGI using the uWSGI Stats Server.
● Varnish — The Varnish Input Plugin gathers stats from Varnish HTTP Cache.
● VMware vSphere — The VMware vSphere Input Plugin uses the vSphere API to gather metrics
from multiple vCenter servers (clusters, hosts, VMs, and data stores). For more information on the
available performance metrics, see Common vSphere Performance Metrics.
● Webhooks — The Webhooks Input Plugin starts an HTTPS server and registers multiple webhook
listeners. Available webhooks: Filestack, GitHub, Mandrill, Rollbar, Papertrail, and Particle.
● Windows Performance Counters — The Windows Performance Counters Input Plugin reads
Performance Counters on the Windows operating system. Windows only.
● Windows Eventlog — The Windows Eventlog Input Plugin reports Windows event logging.
Windows Vista and later only.
● Windows Services — The Windows Services Input Plugin reports Windows services info.
Windows only.
● Wireless — The Wireless Input Plugin gathers metrics about wireless link quality by reading the
/proc/net/wireless file. This plugin currently supports Linux only.
● Wireguard — The Wireguard Input Plugin collects statistics on the local Wireguard server using
the wgctrl library. It reports gauge metrics for Wireguard interface devices and their peers.
● X.509 Certificate — The X.509 Certificate Input Plugin provides information about X.509
certificates accessible using the local file or network connection.
● YouTube — The YouTube Input Plugin gathers information from YouTube channels, including
views, subscribers, and videos.
● ZFS — Supports FreeBSD, Linux. The ZFS Input Plugin provides metrics from your ZFS filesystems.
It supports ZFS on Linux and FreeBSD. It gets ZFS statistics from /proc/spl/kstat/zfs on
Linux and from sysctl and zpool on FreeBSD.

Output plugins
● Amazon CloudWatch — The Amazon CloudWatch Output Plugin sends metrics to Amazon
CloudWatch.
● Amazon Kinesis — The Amazon Kinesis Output Plugin is an experimental plugin that is still in the
early stages of development. It batches all of the points into one PUT request to Kinesis,
considerably reducing the number of API requests.
● AWS Timestream — The AWS Timestream Output Plugin writes metrics to the Amazon
Timestream service.
● Amon — The Amon Output Plugin writes metrics to an Amon server. If the point value being sent
cannot be converted to a float64 value, the metric is skipped. Metrics are grouped by converting
any `_` characters to `.` in the Point Name.
● AMQP — The AMQP Output Plugin writes to an AMQP 0-9-1 exchange (RabbitMQ being a prominent
implementation of the Advanced Message Queuing Protocol). Metrics are written to a topic exchange
using the tag defined in the configuration file as RoutingTag as the routing key.
● Apache Kafka — The Apache Kafka Output Plugin writes to a Kafka Broker acting as a Kafka
Producer.
● BigQuery — The BigQuery Output Plugin writes to Google Cloud’s BigQuery.
● CrateDB — The CrateDB Output Plugin writes to CrateDB, a real-time SQL database for machine
data and IoT, using its PostgreSQL protocol.
● Datadog — The Datadog Output Plugin writes to the Datadog Metrics API and requires an apikey
which can be obtained here for the account.
● Discard — The Discard Output Plugin simply drops all metrics that are sent to it. It is only meant to
be used for testing purposes.
● Dynatrace — The Dynatrace Output Plugin sends metrics to Dynatrace.
● Elasticsearch — The Elasticsearch Output Plugin writes to Elasticsearch via HTTP using Elastic. It
supports Elasticsearch releases from 5.x up to 7.x.
● Exec — The Exec Output Plugin sends Telegraf metrics to an external application over stdin.
● Execd — The Execd Output Plugin runs an external program as a daemon.
● File — The File Output Plugin writes Telegraf metrics to files.
● Google Cloud PubSub — The Google PubSub Output Plugin publishes metrics to a Google
Cloud PubSub topic as one of the supported output data formats.
● Graphite — The Graphite Output Plugin writes to Graphite via raw TCP.
● Grafana Loki — The Grafana Loki Output Plugin sends logs to Loki.
● Graylog — The Graylog Output Plugin writes to a Graylog instance using the gelf format.
● HTTP — The HTTP Output Plugin sends metrics in an HTTP message encoded using one of the
output data formats. For data formats that support batching, metrics are sent in batch format.
● Health — The Health Plugin provides an HTTP health check resource that can be configured to
return a failure status code based on the value of a metric. When the plugin is healthy it will return
a 200 response; when unhealthy it will return a 503 response. The default state is healthy; one or
more checks must fail in order for the resource to enter the failed state.
● InfluxDB v1.x — The InfluxDB v1.x Output Plugin writes to InfluxDB using HTTP or UDP.
● InfluxDB v2 — The InfluxDB v2 Output Plugin writes metrics to InfluxDB 2.0 and 3.0.
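A minimal InfluxDB v2 output sketch (URL, token variable, organization, and bucket names are placeholders):

```toml
[[outputs.influxdb_v2]]
  ## URL of the InfluxDB instance.
  urls = ["http://localhost:8086"]
  ## Authentication token, organization, and destination bucket.
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "telegraf"
```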
● Instrumental — The Instrumental Output Plugin writes to the Instrumental Collector API and
requires a Project-specific API token. Instrumental accepts stats in a format very close to Graphite,
with the only difference being that the type of stat (gauge, increment) is the first token, separated
from the metric itself by whitespace. The increment type is only used if the metric comes in as a
counter through [[inputs.statsd]].
● Librato — The Librato Output Plugin writes to the Librato Metrics API and requires an api_user
and api_token which can be obtained here for the account.
● Logz.io — The Logz.io Output Plugin sends metrics to Logz.io over HTTPS.
● Microsoft Azure Application Insights — The Microsoft Azure Application Insights Output
Plugin writes Telegraf metrics to Application Insights (Microsoft Azure).
● Microsoft Azure Monitor — The Microsoft Azure Monitor Output Plugin sends custom metrics to
Microsoft Azure Monitor. Note that the Azure Monitor custom metrics service is currently in preview
and not available in a subset of Azure regions. Azure Monitor has a metric resolution of one minute. To
handle this in Telegraf, the Azure Monitor Output Plugin automatically aggregates metrics into
one-minute buckets, which are then sent to Azure Monitor on every flush interval. For a Microsoft
blog post on using Telegraf with Microsoft Azure Monitor, see Collect custom metrics for a Linux
VM with the InfluxData Telegraf Agent. The metrics from each input plugin will be written to a
separate Azure Monitor namespace, prefixed with Telegraf/ by default. The field name for each
metric is written as the Azure Monitor metric name. All field values are written as a summarized set
that includes min, max, sum, and count. Tags are written as a dimension on each Azure Monitor
metric.
● MQTT Producer — The MQTT Producer Output Plugin writes to the MQTT server using supported
output data formats.
● NATS Output — The NATS Output Plugin writes to one or more specified NATS instances.
● New Relic — The New Relic Output Plugin writes to New Relic insights using the Metric API.
● NSQ — The NSQ Output Plugin writes to a specified NSQD instance, usually local to the producer.
It requires a server name and a topic name.
● OpenTSDB — The OpenTSDB Output Plugin writes to an OpenTSDB instance using either the
telnet or HTTP mode. Using the HTTP API is the recommended way of writing metrics since
OpenTSDB 2.0. To use HTTP mode, set useHttp to true in config. You can also control how many
metrics are sent in each HTTP request by setting batchSize in config. See the OpenTSDB
documentation for details.
● Prometheus Client — The Prometheus Client Output Plugin starts a Prometheus Client; it
exposes all metrics on /metrics (default) to be polled by a Prometheus server.
● Riemann — The Riemann Output Plugin writes to Riemann using TCP or UDP.
● Sensu — The Sensu Output Plugin writes metrics events to Sensu Go.
● SignalFX — The SignalFX Output Plugin sends metrics to SignalFX.
● Socket Writer — The Socket Writer Output Plugin writes to a UDP, TCP, or UNIX socket. It can
output data in any of the supported output formats.
● Stackdriver — The Stackdriver Output Plugin writes to the Google Cloud Stackdriver API and
requires authentication with Google Cloud using either a service account or user credentials. For
details on pricing, see the Stackdriver documentation. The project setting is required to specify
where Stackdriver metrics will be delivered. Metrics are grouped by the namespace variable and
metric key, for example custom.googleapis.com/telegraf/system/load5.
● Sumo Logic — This plugin sends metrics to Sumo Logic HTTP Source in HTTP messages using
one of the following supported data formats:
○ graphite - for Content-Type of application/vnd.sumologic.graphite
○ carbon2 - for Content-Type of application/vnd.sumologic.carbon2
○ prometheus - for Content-Type of application/vnd.sumologic.prometheus
● Syslog — The Syslog Output Plugin sends syslog messages transmitted over UDP or TCP or TLS,
with or without the octet counting framing. Syslog messages are formatted according to RFC 5424.
● Warp10 — The Warp10 Output Plugin writes metrics to SenX Warp 10.
● Wavefront — The Wavefront Output Plugin writes to a Wavefront proxy, in Wavefront data format
over TCP.
● XML — The XML Parser Plugin parses an XML string into metric fields using XPath expressions.
● Yandex Cloud Monitoring — The Yandex Cloud Monitoring Output Plugin sends custom metrics
to Yandex Cloud Monitoring.

Aggregator plugins
Telegraf aggregator plugins create aggregate metrics (for example, mean, min, max, quantiles, etc.).
● BasicStats — The BasicStats Aggregator Plugin gives count, max, min, mean, s2(variance), and
stdev for a set of values, emitting the aggregate every period seconds.
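A configuration sketch showing the period-based emission described above (interval and stat selection are illustrative):

```toml
[[aggregators.basicstats]]
  ## Emit aggregates every 30 seconds.
  period = "30s"
  ## Keep the raw metrics that fed the aggregate (set true to drop them).
  drop_original = false
  ## Which statistics to compute for each numeric field.
  stats = ["count", "min", "max", "mean", "stdev"]
```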
● Derivative — The Derivative Aggregator Plugin estimates the derivative for all fields of the
aggregated metrics.
● Final — The Final Aggregator Plugin emits the last metric of a contiguous series. A contiguous
series is defined as a series which receives updates within the time period in series_timeout. The
contiguous series may be longer than the time interval defined by period. This is useful for getting
the final value for data sources that produce discrete time series, such as procstat, cgroup,
kubernetes, etc.
● Histogram — The Histogram Aggregator Plugin creates histograms containing the counts of field
values within a range. Values added to a bucket are also added to the larger buckets in the
distribution. This creates a cumulative histogram. Like other Telegraf aggregator plugins, the metric
is emitted every period seconds. Bucket counts, however, are not reset between periods and will
be non-strictly increasing while Telegraf is running.
● Merge — The Merge Aggregator Plugin merges metrics together and generates line protocol with
multiple fields per line. This optimizes memory and network transfer efficiency. Use this plugin
when fields are split over multiple lines of line protocol with the same measurement, tag set, and
timestamp on each.
● MinMax — The MinMax Aggregator Plugin aggregates min and max values of each field it sees,
emitting the aggregate every period seconds.
● Quantile — The Quantile Aggregator Plugin aggregates specified quantiles for each numeric field
per metric it sees and emits the quantiles every designated period.
● ValueCounter — The ValueCounter Aggregator Plugin counts the occurrence of values in fields
and emits the counter once every ‘period’ seconds. A use case for the ValueCounter
aggregator plugin is when you are processing an HTTP access log with the Logparser Input Plugin
and want to count the HTTP status codes. The counted fields must be configured with the fields
configuration directive. When no fields are provided, the plugin will not count any fields. The
results are emitted in fields, formatted as originalfieldname_fieldvalue = count. ValueCounter only
works on fields of type int, bool, or string. Float fields are dropped to prevent the creation of
too many fields.
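For the HTTP status code use case above, a sketch of the required fields directive might look like this (the status field name is illustrative):

```toml
[[aggregators.valuecounter]]
  ## Emit counters every 30 seconds.
  period = "30s"
  ## Fields whose values should be counted (required).
  fields = ["status"]
```

With an HTTP access log, this would emit fields such as status_200 and status_404 holding the respective counts.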

Processor plugins
Telegraf processor plugins transform, decorate, and filter metrics.
● AWS EC2 Metadata — The AWS EC2 Metadata Processor Plugin appends metadata gathered
from AWS IMDS to metrics associated with EC2 instances.
● Converter — The Converter Processor Plugin is used to change the type of tag or field values. In
addition to changing field types, it can convert between fields and tags. Values that cannot be
converted are dropped.
● Clone — The Clone Processor Plugin creates a copy of each metric to preserve the original metric
and allow modifications in the copied metric.
● Date — The Date Processor Plugin adds the metric timestamp as a human-readable tag.
● Dedup — The Dedup Processor Plugin filters metrics whose field values are exact repetitions of
the previous values.
● Defaults — The Defaults Processor Plugin allows you to ensure certain fields will always exist with
a specified default value on your metrics.
● Enum — The Enum Processor Plugin allows the configuration of value mappings for metric fields.
The main use case for this is to rewrite status codes such as red, amber, and green by numeric
values such as 0, 1, 2. The plugin supports string and bool types for the field values. Multiple Fields
can be configured with separate value mappings for each field. Default mapping values can be
configured to be used for all values, which are not contained in the value_mappings. The
processor supports explicit configuration of a destination field. By default the source field is
overwritten.
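The status code example above can be sketched as a mapping section (the status field and dest name are illustrative):

```toml
[[processors.enum]]
  [[processors.enum.mapping]]
    ## Source field to map.
    field = "status"
    ## Optional destination; by default the source field is overwritten.
    # dest = "status_code"
    ## Fallback for values not present in value_mappings.
    # default = -1
    [processors.enum.mapping.value_mappings]
      green = 0
      amber = 1
      red = 2
```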
● Execd — The Execd Processor Plugin runs an external program as a separate process. It pipes
metrics into the process’s STDIN and reads processed metrics from its STDOUT.
● Filepath — The Filepath Processor Plugin maps certain Go functions from path/filepath onto tag
and field values.
● GeoIP — The GeoIP Processor Plugin looks up IP addresses in the MaxMind GeoLite2 database
and adds the respective ISO country code, city name, latitude, and longitude.
● Network Interface Name — The Network Interface Name Processor Plugin looks up network
interface names using SNMP.
● Override — The Override Processor Plugin allows overriding all modifications that are supported
by input plugins and aggregator plugins:
○ name_override
○ name_prefix
○ name_suffix
○ tags
All metrics passing through this processor are modified accordingly. Select the metrics to modify
using the standard measurement filtering options. Values of name_override, name_prefix,
name_suffix, and already-present tags with conflicting keys are overwritten; absent tags are
created. Use cases for this plugin include ensuring certain tags or naming conventions are adhered
to irrespective of input plugin configurations, e.g., via taginclude.
● Parser — The Parser Processor Plugin parses defined fields containing the specified data format
and creates new metrics based on the contents of the field.
● Pivot — The Pivot Processor Plugin rotates single-valued metrics into a multi-field metric. This
transformation often results in data that is easier to use with mathematical operators and
comparisons. It also flattens data into a more compact representation for write operations with
some output data formats. To perform the reverse operation, use the Unpivot processor.
● Port Name Lookup — The Port Name Lookup Processor Plugin converts a tag containing a
well-known port number to the registered service name.
● Printer — The Printer Processor Plugin simply prints every metric passing through it.
● Regex — The Regex Processor Plugin transforms tag and field values using a regular expression
(regex) pattern. If the result_key parameter is present, it can produce new tags and fields from
existing ones.
● Rename — The Rename Processor Plugin renames InfluxDB measurements, fields, and tags.
● Reverse DNS — The Reverse DNS Processor Plugin does a reverse-dns lookup on tags (or fields)
with IPs in them.
● S2 Geo — The S2 Geo Processor Plugin adds tags with an S2 cell ID token of a specified cell level.
Tags are used in Flux experimental/geo functions. Specify lat and lon field values with WGS-84
coordinates in decimal degrees.
● Starlark — The Starlark Processor Plugin calls a Starlark function for each matched metric,
allowing for custom programmatic metric processing.
● Strings — The Strings Processor Plugin maps certain Go string functions onto InfluxDB
measurement, tag, and field values. Values can be modified in place or stored in another key.
Implemented functions are:
○ lowercase
○ uppercase
○ trim
○ trim_left
○ trim_right
○ trim_prefix
○ trim_suffix
These functions are processed in the order that they appear above. You can specify the
measurement, tag, or field that you want processed in each section, and optionally a dest if you
want the result stored in a new tag or field. Many transformations can be applied with a single
strings processor.
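As a sketch, two sections of a single strings processor might look like this (the method tag and message field names are illustrative):

```toml
[[processors.strings]]
  ## Lowercase the value of the 'method' tag in place.
  [[processors.strings.lowercase]]
    tag = "method"
  ## Trim whitespace from a field, storing the result under a new key.
  [[processors.strings.trim]]
    field = "message"
    # dest = "message_trimmed"
```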
● Tag Limit — The Tag Limit Processor Plugin preserves only a certain number of tags for any given
metric and chooses the tags to preserve when the number of tags appended by the data source is
over the limit. This can be useful when dealing with output systems (e.g. Stackdriver) that impose
hard limits on the number of tags or labels per metric or where high levels of cardinality are
computationally or financially expensive.
● Template — The Template Processor Plugin applies a Go template to metrics to generate a new
tag. Primarily used to create a tag for dynamic routing to multiple output plugins or to an output
specific routing option. The template has access to each metric’s measurement name, tags, fields,
and timestamp using the interface in template_metric.go.
● TopK — The TopK Processor Plugin is a filter designed to get the top series over a period of time.
It can be tweaked to do its top K computation over a period of time, so spikes can be smoothed
out. This processor goes through the following steps when processing a batch of metrics:
○ Groups metrics in buckets using their tags and name as key.
○ Aggregates each of the selected fields for each bucket by the selected aggregation
function (sum, mean, etc.).
○ Orders the buckets by one of the generated aggregations and returns all metrics in the top
K buckets, then reorders by the next generated aggregation and returns its top K buckets,
and so on, until it runs out of fields.
○ The plugin makes sure not to duplicate metrics.
○ Note that, depending on the number of metrics in each computed bucket, more than K
metrics may be returned.
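The steps above can be sketched in configuration (tag and field names are placeholders; option names follow the plugin's documented form, with period given in seconds):

```toml
[[processors.topk]]
  ## Aggregation window, in seconds.
  period = 10
  ## Number of top buckets to keep.
  k = 3
  ## Tags that define a bucket.
  group_by = ["host"]
  ## Fields to aggregate and rank by.
  fields = ["cpu_usage"]
  ## Aggregation function used for ranking.
  aggregation = "mean"
```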
● Unpivot — The Unpivot Processor Plugin rotates a multi-field series into single-valued metrics.
This transformation often results in data that is easier to aggregate across fields. To perform the
reverse operation, use the Pivot processor.

InfluxDB documentation, downloads & guides

Get InfluxDB

Try InfluxDB Cloud for Free

Get documentation

Additional tech papers

Join the InfluxDB community
