Splunk Enterprise 7.2.1 - Admin Manual
Generated: 12/14/2018 10:19 am

Table of Contents

Administer Splunk Enterprise with the command line interface (CLI)
    Use the CLI to administer a remote Splunk Enterprise instance
    Customize the CLI login banner

Administer the app key value store
    KV store troubleshooting tools

Manage users
    About users and roles
    Configure user language and locale
    Configure user session timeouts

Configuration file reference
    default.meta.conf
    default-mode.conf
    deployment.conf
    deploymentclient.conf
    distsearch.conf
    eventdiscoverer.conf
    event_renderers.conf
    eventtypes.conf
    fields.conf
    health.conf
    indexes.conf
    inputs.conf
    instance.cfg.conf
    limits.conf
    literals.conf
    macros.conf
    messages.conf
    multikv.conf
    outputs.conf
    passwords.conf
    procmon-filters.conf
    props.conf
    pubsub.conf
    restmap.conf
    savedsearches.conf
    searchbnf.conf
    segmenters.conf
    server.conf
    serverclass.conf
    serverclass.seed.xml.conf
    setup.xml.conf
    source-classifier.conf
    sourcetypes.conf
    splunk-launch.conf
    tags.conf
    telemetry.conf
    times.conf
    transactiontypes.conf
    transforms.conf
    ui-prefs.conf
    ui-tour.conf
    user-prefs.conf
    user-seed.conf
    viewstates.conf
    visualizations.conf
    web.conf
    wmi.conf
    workflow_actions.conf
    workload_pools.conf
    workload_rules.conf
Welcome to Splunk Enterprise
administration
Unless otherwise stated, tasks and processes in this manual are suitable for both
Windows and *nix operating systems.
For a list and simple description of the other manuals available to Splunk users,
see "Other manuals for the Splunk administrator".
Use the Splunk command line interface (CLI) to configure and administer
Splunk: An overview of how to use the Command Line Interface to configure
Splunk. See "About the CLI" for more information.

Optimize Splunk on Windows: Some Windows-specific things you should know
about working with Splunk, including some tips for optimal deployment and
information about working with system images. See "Introduction for Windows
admins" for more information.

Learn about Splunk licenses: Install your license, then go here to learn
everything you need to know about Splunk licenses. See "Manage Splunk
licenses" for more information.

Get familiar with Splunk apps: An introduction and overview of Splunk Apps
and how you might integrate them into your Splunk configuration. See "Meet
Splunk apps" for more information.

Manage user settings: The Manage users chapter shows you how to manage
settings for users. For more information about creating users, see Users
and role-based access control in the Securing Splunk Enterprise manual.
Below are administration tasks you might want to do after initial configuration and
where to go to learn more.
Define alerts: see the Alerting Manual.
Manage search jobs: see Manage search jobs.
For more administration help, see the manuals described below.
The Installation Manual describes how to install and upgrade Splunk Enterprise.
For information on specific tasks, start here.
Getting Data In is the place to go for information about data inputs: how to
consume data from external sources and how to enhance the value of your data.
Preview your data: See how your data will look after indexing.
Improve the process: Improve the data input process.
Manage indexes and indexers
Managing Indexers and Clusters tells you how to configure indexes. It also
explains how to manage the components that maintain indexes: indexers and
clusters of indexers.
Perform capacity planning for Splunk platform deployments: Estimate hardware
requirements.
Learn how to forward data: Forward data.
Distribute searches across multiple indexers: Search across multiple indexers.
Deploy configuration updates across your environment: Update the deployment.
Secure Splunk Enterprise
Securing Splunk tells you how to secure your Splunk Enterprise deployment.
References and other information
• Managing Indexers and Clusters: How to configure indexes and manage the
components that maintain them. Key topics: archive your indexes; About
clusters and index replication; Deploy clusters.
• Distributed Deployment: Scaling your deployment to fit the needs of your
enterprise. Key topic: Distributed Splunk overview.
• Forwarding Data: Forwarding data into Splunk. Key topic: Forward data.
• Distributed Search: Using search heads to distribute searches across
multiple indexers. Key topic: Search across multiple indexers.
• Updating Splunk Components: Using the deployment server and forwarder
management to update Splunk components such as forwarders and indexers. Key
topic: Deploy updates across your environment.
• Securing Splunk: Data security and user authentication. Key topics: User
authentication and roles; Encryption and authentication with SSL; Auditing.
• Monitoring Splunk Enterprise: Use included dashboards and alerts to monitor
and troubleshoot your Splunk Enterprise deployment. Key topic: About the
monitoring console.
• Troubleshooting: Solving problems. Key topics: First steps; Splunk log
files; Some common scenarios.
• Installation: Installing and upgrading Splunk. Key topics: System
requirements; Step by step installation procedures; Upgrade from an earlier
version.
The topic "Learn to administer Splunk" provides more detailed guidance on
where to go to read about specific admin tasks.
Other books of interest to the Splunk administrator
In addition to the manuals that describe the primary administration tasks, you
might want to visit other manuals from time to time, depending on the size of your
Splunk Enterprise installation and the scope of your responsibilities. These are
other manuals in the Splunk Enterprise documentation set:
For links to the full set of Splunk Enterprise documentation, including the
manuals listed above, visit: Splunk Enterprise documentation.
To access all the Splunk documentation, including manuals for apps, go to this
page: Welcome to Splunk documentation.
Make a PDF
If you'd like a PDF version of this manual, click the red Download the Admin
Manual as PDF link below the table of contents on the left side of this page. A
PDF version of the manual is generated on the fly. You can save it or print it to
read later.
Introduction for Windows admins
Welcome!
This manual has topics that will help you experiment with, learn, deploy, and get
the most out of Splunk.
Unless otherwise specified, the information in this manual is helpful for both
Windows and *nix users. If you are unfamiliar with Windows or *nix operational
commands, we strongly recommend you check out Differences between *nix and
Windows in Splunk operations.
We've also provided some extra information in the chapter "Get the most out of
Splunk Enterprise on Windows". This chapter is intended for Windows users to
help you make the most of Splunk and includes the following information.
Optimize Splunk for peak performance describes ways to keep your Splunk on
Windows deployment running properly, either during the course of the
deployment, or after the deployment is complete.
Put Splunk onto system images helps you make Splunk a part of every
Windows system image or installation process. From here you can find tasks for
installing Splunk and Splunk forwarders onto your system images.
• An overview of all of the installed Splunk for Windows services (from the
Installation Manual)
• What Splunk can monitor (from the Getting Data In Manual)
• Considerations for deciding how to monitor remote Windows data (from
the Getting Data In Manual). Read this topic for important information on
how to get data from multiple machines remotely.
• Consolidate data from multiple hosts (from the Universal Forwarder
Manual)
When you get stuck, Splunk has a large free support infrastructure that can help:
• Splunk Answers.
• The Splunk Community Wiki.
• The Splunk Internet Relay Chat (IRC) channel (EFNet #splunk). (IRC
client required)
If you still don't have an answer to your question, you can get in touch with
Splunk's support team. The Support Contact page tells you how to do that.
Note: Levels of support above the community level require an Enterprise license.
To get one, you'll need to speak with the Sales team.
The 500 MB limit refers to the amount of new data you can add (we call this
indexing) per day. But you can keep adding data every day, storing as much as
you want. For example, you could add 500 MB of data per day and eventually
have 10 TB of data in Splunk Free.
If you need more than 500 MB/day, you'll need to purchase an Enterprise
license. See How Splunk licensing works for more information about licensing.
Splunk Free regulates your license usage by tracking license violations. If you go
over 500 MB/day more than 3 times in a 30 day period, Splunk Free continues to
index your data, but disables search functionality until you are back down to 3 or
fewer warnings in the 30 day period.
Splunk Free is designed for personal, ad hoc search and visualization of IT data.
You can use Splunk Free for ongoing indexing of small volumes (<500 MB/day)
of data. Additionally, you can use it for short-term bulk-loading and analysis of
larger data sets--Splunk Free lets you bulk-load much larger data sets up to 3
times within a 30 day period. This can be useful for forensic review of large data
sets.
♦ All accesses are treated as equivalent to the admin user. There is
only one role (admin), and it is not configurable. You cannot add
more roles or create user accounts.
♦ Searches are run against all public indexes, 'index=*'.
♦ Restrictions on search, such as user quotas, maximum per-search
time ranges, and search filters, are not supported.
♦ The capability system is disabled. All available capabilities are
enabled for all users accessing Splunk Free.
When you first download and install Splunk, you are automatically using an
Enterprise Trial license. You can continue to use the Enterprise Trial license until
it expires, or switch to the Free license right away, depending on your
requirements.
Splunk Enterprise Trial gives you access to a number of features that are not
available in Splunk Free. When you switch, be aware of the following:
When you attempt to make any of the above configurations in Splunk Web while
using an Enterprise Trial license, you will be warned about the above limitations
in Splunk Free.
How do I switch to Splunk Free?
If you currently have Splunk Enterprise (trial or not), you can either wait for your
Enterprise license to expire, or switch to a Free license at any time. To switch to
a Free License:
1. Log in to Splunk Web as a user with admin privileges and navigate to Settings
> Licensing.
Paths
A major difference between the way that *nix and Windows operating systems
handle files and directories is the type of slash used to separate files or
directories in the pathname. *nix systems use the forward slash ("/").
Windows, on the other hand, uses the backslash ("\").

For example, here is the path to the main splunkd binary on each platform:

/opt/splunk/bin/splunkd
C:\Program Files\Splunk\bin\splunkd.exe
Environment variables
Configuration files
Splunk Enterprise works with configuration files that use ASCII/UTF-8 character
set encoding. When you edit configuration files on Windows, configure your text
editor to write files with this encoding. On some Windows versions, UTF-8 is not
the default character set encoding. See How to edit a configuration file.
All of these methods change the contents of the underlying configuration files.
You may find different methods handy in different situations.
You can perform most common configuration tasks in Splunk Web. Splunk Web
runs by default on port 8000 of the host on which it is installed:
• If you're running Splunk on your local machine, the URL to access Splunk
Web is http://localhost:8000.
• If you're running Splunk on a remote machine, the URL to access Splunk
Web is http://<hostname>:8000, where <hostname> is the name of the
machine Splunk is running on.
Administration menus can be found under Settings in the Splunk Web menu bar.
Most tasks in the Splunk documentation set are described for Splunk Web. For
more information about Splunk Web, see Meet Splunk Web.
Most of Splunk's configuration information is stored in .conf files. These files are
located under your Splunk installation directory (usually referred to in the
documentation as $SPLUNK_HOME) under /etc/system. In most cases you can
copy these files to a local directory and make changes to these files with your
preferred text editor.
Before you begin editing configuration files, read "About configuration files".
Use Splunk CLI
Many configuration options are available via the CLI. These options are
documented in the CLI chapter in this manual. You can also get CLI help
reference with the help command while Splunk is running:
./splunk help
For more information about the CLI, refer to "About the CLI" in this manual. If you
are unfamiliar with CLI commands, or are working in a Windows environment,
you should also check out Differences between *nix and Windows in Splunk
operations.
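For instance, here are two illustrative invocations, run from $SPLUNK_HOME/bin. The object names are examples only; the -uri flag (covered in "Use the CLI to administer a remote Splunk Enterprise instance") points a command at a remote instance's management port:

./splunk help commands
./splunk list user -uri https://<hostname>:8089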
Developers can create setup screens for an app that allow users to set
configurations for that app without editing the configuration files directly. Setup
screens make it easier to distribute apps to different environments, or to
customize an app for a particular usage.
Setup screens use Splunk's REST API to manage the app's configuration files.
For more information about setup screens, refer to Create a setup page for a
Splunk app on the Splunk Developer Portal.
Get the most out of Splunk Enterprise on
Windows
When deploying Splunk on Windows on a large scale, you can rely completely on
your own deployment utilities (such as System Center Configuration Manager or
Tivoli/BigFix) to place both Splunk and its configurations on the machines in your
enterprise. Or, you can integrate Splunk into system images and then deploy
Splunk configurations and apps using Splunk's deployment server.
Concepts
When you deploy Splunk into your Windows network, it captures data from the
machines and stores it centrally. Once the data is there, you can search and
create reports and dashboards based on the indexed data. More importantly, for
system administrators, Splunk can send alerts to let you know what is happening
as the data arrives.
Considerations
First, you must inventory your enterprise, beginning at the physical network, and
leading up to how the machines on that network are individually configured. This
includes, but is not limited to:
Then, you must answer a number of questions prior to starting the deployment,
including:
• What data on your machines needs indexing? What part of this data
do you want to search, report, or alert across? This is probably the
most important consideration to review. The answers to these questions
determine how you address every other consideration. It determines
where to install Splunk, and what types of Splunk you use in those
installations. It also determines how much computing and network
bandwidth Splunk will potentially use.
• How is the network laid out? How are any external site links
configured? What security is present on those links? Fully
understanding your network topology helps determine which machines
you should install Splunk on, and what types of Splunk (indexers or
forwarders) you should install on those machines from a networking
standpoint.
A site with thin LAN or WAN links makes it necessary to consider how much
Splunk data should be transferred between sites. For example, if you have a
hub-and-spoke type of network, with a central site connected to branch sites, it
might be a better idea to deploy forwarders on machines in the branch sites,
which send data to an intermediate forwarder in each branch. Then, the
intermediate forwarder would send data back to the central site. This is a less
costly move than having all machines in a branch site forward their data to an
indexer in the central site.
If you have external sites that have file, print or database services, you'll need to
account for that traffic as well.
• How is your Active Directory (AD) configured? How are the operations
masters roles on your domain controllers (DCs) defined? Are all domain
controllers centrally located, or do you have controllers located in satellite
sites? If your AD is distributed, are your bridgehead servers configured
properly? Is your Inter-site Topology Generator (ISTG)-role server
functioning correctly? If you are running Windows Server 2008 R2, do you
have read-only domain controllers (RODCs) in your branch sites? If so,
then you have to consider the impact of AD replication traffic as well as
Splunk and other network traffic.
• What other roles are the servers in your network playing? Splunk
indexers need resources to run at peak performance, and sharing servers
with other resource-intensive applications or services (such as Microsoft
Exchange, SQL Server and even Active Directory itself) can potentially
lead to problems with Splunk on those machines. For additional
information on sharing server resources with Splunk indexers, see
"Introduction to capacity planning for Splunk Enterprise" in the Capacity
Planning Manual.
How you deploy Splunk into your existing environment depends on the needs
you have for Splunk, balanced with the available computing resources you have,
your physical and network layouts, and your corporate infrastructure. As there is
no one specific way to deploy Splunk, there are no step-by-step instructions to
follow. There are, however, some general guidelines to observe.
♦ Test network throughput, particularly between sites with thin
network links.
You might need to place DCs on different subnets on your network, and seize
flexible single master operations (FSMO, or operations master) roles as
necessary to ensure peak AD operation and replication performance during the
deployment.
greatly reduce the amount of Splunk-related traffic sent over the
wire.
• Dedicate fast disks for your Splunk indexes. The faster the available
disks on a system are for Splunk indexing, the faster Splunk will run. Use
disks with spindle speeds faster than 10,000 RPM when possible. When
dedicating redundant storage for Splunk, use hardware-based RAID 1+0
(also known as RAID 10). It offers the best balance of speed and
redundancy. Software-based RAID configurations through the Windows
Disk Management utility are not recommended.
• Use multiple indexes, where possible. Distribute the data that is
indexed by Splunk across different indexes. Sending all data to the default
index can cause I/O bottlenecks on your system. Where appropriate,
configure your indexes so that they point to different physical volumes on
your systems, when possible. For information on how to configure
indexes, read "Configure your indexes" in this manual. (A configuration
sketch follows this list.)
• Don't store the hot and warm database buckets of your Splunk
indexes on network volumes. Network latency will decrease
performance significantly. Reserve fast, local disk for the hot and warm
buckets of your Splunk indexes. You can specify network shares such as
Distributed File System (DFS) volumes or Network File System (NFS)
mounts for the cold and frozen buckets of the index, but note that
searches that include data stored in the cold database buckets will be
slower.
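To illustrate the two preceding points, here is a minimal indexes.conf sketch. The index name and drive letters are hypothetical; hot/warm buckets stay on a fast local volume, while cold buckets can sit on slower or remote storage:

[winevents]
# Hot/warm buckets: fast, dedicated local disk
homePath = D:\splunkdata\winevents\db
# Cold buckets: slower or network storage is acceptable, at some search-speed cost
coldPath = E:\splunkdata\winevents\colddb
thawedPath = E:\splunkdata\winevents\thaweddb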
• For more specific information about getting Windows data into Splunk,
review "About Windows data and Splunk" in the Getting Data In Manual.
• For information on distributed Splunk deployments, read "Distributed
overview" in the Distributed Deployment Manual. This overview is
essential reading for understanding how to set up Splunk deployments,
irrespective of the operating system that you use. You can also read about
Splunk's distributed deployment capabilities there.
• For information about planning larger Splunk deployments, read
"Introduction to capacity planning for Splunk Enterprise" in the Capacity
Planning Manual and "Deploying Splunk on Windows" in this manual.
The main reason to integrate Splunk into Windows system images is to ensure
that Splunk is available immediately when the machine is activated for use in the
enterprise. This frees you from having to install and configure Splunk after
activation.
In some situations, you may want to integrate a full instance of Splunk into a
system image. Where and when this is more appropriate depends on your
specific needs and resource availability.
Splunk doesn't recommend that you include a full version of Splunk in an image
for a server that performs any other type of role, unless you have specific need
for the capability that an indexer has over a forwarder. Installing multiple indexers
in an enterprise does not give you additional indexing power or speed, and can
lead to undesirable results.
• the amount of data you want Splunk to index, and where you want
Splunk to send that data, if applicable. This feeds directly into disk
space calculations, and should be a top consideration.
• the type of Splunk instance to install on the image or machine.
Universal forwarders have a significant advantage when installing on
workstations or servers that perform other duties, but might not be
appropriate in some cases.
• the available system resources on the imaged machine. How much
disk space, RAM and CPU resources are available on each imaged
system? Will it support a Splunk install?
• the resource requirements of your network. Splunk needs network
resources, whether you're using it to connect to remote machines using
WMI to collect data, or you're installing forwarders on each machine and
sending that data to an indexer.
• the system requirements of other programs installed on the image. If
Splunk is sharing resources with another server, it can take available
resources from those other programs. Consider whether or not you should
install other programs on a workstation or server that is running a full
instance of Splunk. A universal forwarder will work better in cases like this,
as it is designed to be lightweight.
• the role that the imaged machine plays in your environment. Will it be
a workstation only running productivity applications like Office? Or will it be
an operations master domain controller for your Active Directory forest?
Once you have determined the answers to the questions in the checklist above,
the next step is to integrate Splunk into your system images. The steps listed are
generic, allowing you to use your favorite system imaging or configuration tool to
complete the task.
Choose one of the following options for system integration:
1. On a reference computer, install and configure Windows the way that you
want, including installing Windows features, service packs, and other
components.
2. Install and configure necessary applications, taking into account Splunk's
system and hardware capacity requirements.
3. Install and configure the universal forwarder from the command line. You
must supply at least the LAUNCHSPLUNK=0 command line flag when you
perform the installation. (A sample command appears after these steps.)
4. Proceed through the graphical portion of the install, selecting the inputs,
deployment servers, and/or forwarder destinations you want.
5. After the installation has completed, open a command prompt or
PowerShell window.
1. (Optional) Edit configuration files that were not configurable in the installer.
2. Change to the universal forwarder bin directory.
3. Run .\splunk clone-prep-clear-config.
4. Exit the command prompt or PowerShell window.
5. In the Services Control Panel, configure the splunkd service to start
automatically by setting its startup type to 'Automatic'.
6. Prepare the system image for domain participation using a utility such as
Windows System Image Manager (WSIM). Microsoft recommends using
SYSPREP or WSIM as the method to change machine Security Identifiers
(SIDs) prior to cloning, as opposed to using third-party tools (such as
Ghost Walker or NTSID.)
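A sketch of the command-line installation from step 3 of the first procedure above. The MSI filename varies by version and architecture, and AGREETOLICENSE=Yes is shown as one commonly needed accompanying flag:

msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes LAUNCHSPLUNK=0

LAUNCHSPLUNK=0 prevents the forwarder services from starting after installation, which keeps the instance clean for the clone-prep steps above.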
Clone and restore the image
1. Restart the machine and clone it with your favorite imaging utility.
2. After cloning the image, use the imaging utility to restore it into another
physical or virtual machine.
3. Run the cloned image. Splunk services start automatically.
4. Use the CLI to restart Splunk Enterprise to remove the cloneprep
information:
splunk restart
You must restart Splunk Enterprise from the CLI to delete the cloneprep
file. Restarting the Splunk service does not perform the deletion.
5. Confirm that the $SPLUNK_HOME\cloneprep file has been deleted.
2. Install and configure any necessary applications, taking into account Splunk's
system and hardware capacity requirements.
Important: You can install using the GUI installer, but more options are available
when installing the package from the command line.
6. Clean any event data by issuing a .\splunk clean eventdata command.
8. Ensure that the splunkd and splunkweb services are set to start automatically
by setting their startup type to 'Automatic' in the Services Control Panel.
9. Prepare the system image for domain participation using a utility such as
SYSPREP (for Windows XP and Windows Server 2003/2003 R2) and/or
Windows System Image Manager (WSIM) (for Windows Vista, Windows 7, and
Windows Server 2008/2008 R2).
10. Once you have configured the system for imaging, reboot the machine and
clone it with your favorite imaging utility.
Administer Splunk Enterprise with Splunk
Web
To launch Splunk Web, navigate to:

http://mysplunkhost:<port>

The first time you log in to Splunk with an Enterprise license, log in as the
administrator you created at installation time:

Username - admin
Password - <password>
Note: Splunk with a free license does not have access controls, so you will not
be prompted for login information.
Note: Starting in Splunk version 4.1.4, you cannot access Splunk Free from a
remote browser until you have edited $SPLUNK_HOME/etc/system/local/server.conf
and set allowRemoteLogin to Always. If you are running Splunk Enterprise, remote
login is disabled by default (set to requireSetPassword) for the admin user until
you change the default password.
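A minimal sketch of that edit; the allowRemoteLogin setting lives in the [general] stanza of server.conf:

[general]
allowRemoteLogin = always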
• Configure your data inputs
• Search data and report and visualize results
• Investigate problems
• Manage users natively or via LDAP strategies
• Troubleshoot Splunk deployments
• Manage clusters and peers
Refer to the system requirements for a list of supported operating systems and
browsers.
• Data Inputs Lets you view a list of data types and configure them. To add
an input, click the Add data button in the Data Inputs page. For more
information about how to add data, see the Getting Data In manual.
• Forwarding and receiving lets you set up your forwarders and receivers.
For more information about setting up forwarding and receiving, see the
Forwarding Data manual.
• Indexes lets you add, disable, and enable indexes.
• Report acceleration summaries takes you to the searching and
reporting app to let you review your existing report summaries. For more
information about creating report summaries, see the Knowledge Manager
Manual.
By navigating to Settings > Users and Authentication > Access Control you
can do the following:
For more information about working with users and authentication, see Securing
Splunk Enterprise.
From this page, you can select an app from a list of those you have already
installed and are currently available to you. From here you can also access the
following menu options:
• Find more Apps lets you search for and install additional apps.
• Manage Apps lets you manage your existing apps.
You can also access all of your apps in the Home page.
For more information about apps, see Developing views and apps for Splunk
Web.
The options under Settings > System let you do the following:
• Server settings lets you manage Splunk platform settings like ports, host
name, index paths, email server, and system logging and deployment
client information. For more about configuring and managing distributed
environments with Splunk Web, see the Updating Splunk Components
manual.
• Server controls lets you restart the Splunk platform.
• Licensing lets you manage and renew your Splunk licenses.
When you add an input to Splunk, that input gets added relative to the app you're
in. Some apps, like the *nix and Windows apps, write input data to a specific
index (in the case of *nix and Windows, that is the os index). If you review the
summary dashboard and you don't see data that you're certain is in Splunk, be
sure that you're looking at the right index.
You may want to add the index that an app uses to the list of default indexes for
the role you're using. For more information about roles, refer to this topic about
roles in Securing Splunk. For more information about Summary Dashboards, see
the Search Tutorial.
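For instance, a minimal authorize.conf sketch (created in a local directory, never in default) that adds the os index used by the *nix and Windows apps to the default search indexes for the user role; the index list shown is illustrative:

[role_user]
# keep main and add os to the indexes searched by default
srchIndexesDefault = main;os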
• You can add and edit the text of custom notifications that display in the
Messages menu.
• You can set the audience for certain error or warning messages generated
by Splunk Enterprise.
You can add a custom message to Splunk Web, for example to notify your users
of scheduled maintenance. You need admin or system user level privileges to
add or edit a custom notification.
For some messages that appear in Splunk Web, you can control which users see
the message.
If by default a message displays only for users with a particular capability, such
as admin_all_objects, you can display the message to more of your users,
without granting them the admin_all_objects capability. Or you can have fewer
users see a message.
The message you configure must exist in messages.conf. You can set the
audience for a message by role or by capability, by modifying settings in
messages.conf.
The message you restrict must exist in messages.conf. Not all messages reside
in messages.conf. If a message contains a Learn more link it resides in
messages.conf and is configurable. If a message does not contain a Learn more
link, it might or might not reside in messages.conf and be configurable.
For example, the message about excess search artifacts in the dispatch
directory, shown in the stanza below, contains a Learn more link.
Once you have chosen a message that you want to configure, check whether it is
configurable. Search for parts of the message string in
$SPLUNK_HOME/etc/system/default/messages.conf on *nix or
%SPLUNK_HOME%\etc\system\default\messages.conf on Windows. The message
string is a setting within a stanza. The stanza name is a message identifier. Make
note of the stanza name to use in your customized copy of messages.conf. Never
edit the configuration files that are in the default directory.
For example, searching the default messages.conf for text from the sample
message shown above, such as "artifacts," leads you to the following stanza:
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
message = The number of search artifacts in the dispatch directory
is higher than recommended (count=%lu, warning threshold=%lu) and could
have an impact on search performance.
action = Remove excess search artifacts using the "splunk
clean-dispatch" CLI command, and review artifact retention policies in
limits.conf and savedsearches.conf. You can also raise this warning
threshold in limits.conf / dispatch_dir_warning_size.
severity = warn
capabilities = admin_all_objects
help = message.dispatch.artifacts
The stanza name for this message is DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU.
About editing messages.conf
A best practice for modifying messages.conf is to use a custom app. Deploy the
app containing the message modifications to every instance in your deployment.
Never edit the configuration files that are in the default directory.
For example,
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
capabilities = admin_all_objects, can_delete
For a list of capabilities and their definitions, see About defining roles with
capabilities in Securing Splunk Enterprise.
If a role attribute is set for the message, that attribute takes precedence over the
capabilities attribute. The capabilities attribute for the message is ignored.
See messages.conf.spec.
Set the roles required to view a message by editing the roles attribute in the
messages.conf stanza for the message. If a user belongs to any of these roles,
the message is visible to them.
If a role attribute is set for the message, that attribute takes precedence over the
capabilities attribute. The capabilities attribute for the message is ignored.
For example:
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
roles = admin
Administer Splunk Enterprise with
configuration files
• System settings
• Authentication and authorization information
• Index mappings and settings
• Deployment and cluster configurations
• Knowledge objects and saved searches
For a list of configuration files and an overview of the area each file covers, see
List of configuration files in this manual.
Most configuration files come packaged with your Splunk software in the
$SPLUNK_HOME/etc/system/default/ directory.
When you change your configuration in Splunk Web, that change is written to a
copy of the configuration file for that setting. Splunk software creates a copy of
this configuration file (if it does not exist), writes the change to that copy, and
adds it to a directory under $SPLUNK_HOME/etc/.... The directory that the new file
is added to depends on a number of factors that are discussed in Configuration
file directories in this manual. The most common directory is
$SPLUNK_HOME/etc/system/local, which is used in the example.
If you add a new index in Splunk Web, the software performs the following
actions:
4. Leaves the default file unchanged in $SPLUNK_HOME/etc/system/default.
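For example, adding a hypothetical index named myindex typically results in a copy of indexes.conf under $SPLUNK_HOME/etc/system/local that contains only the new stanza, roughly like this sketch:

[myindex]
homePath = $SPLUNK_DB/myindex/db
coldPath = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb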
While you can perform a lot of configuration with Splunk Web or CLI commands,
you can also edit the configuration files directly. Some advanced configurations
are not exposed in Splunk Web or the CLI and can only be changed by editing
the configuration files directly.
Important: Never change, copy, or move the configuration files that are in the
default directory. Default files must remain intact and in their original location. To
change settings for a particular configuration file, you must first create a new
version of the file in a non-default directory and then add the settings that you
want to change. When you first create this new version of the file, start with an
empty file. Do not start from a copy of the file in the default directory. For
information on the directories where you can manually change configuration files,
see Configuration file directories.
• Learn about how the default configuration files work, and where to put the
files that you edit. See Configuration file directories.
• Learn about the structure of the stanzas that comprise configuration files
and how the attributes you want to edit are set up. See Configuration file
structure.
• Learn how different versions of the same configuration files in different
directories are layered and combined so that you know the best place to
put your file. See Configuration file precedence.
• Consult the product documentation, including the .spec and .example files
for the configuration file. These documentation files reside in the file
system in $SPLUNK_HOME/etc/system/README, as well as in the last chapter
of this manual.
After you are familiar with the configuration file content and directory structure,
and understand how to leverage Splunk Enterprise configuration file precedence,
see How to edit a configuration file to learn how to safely change your files.
in your default, local, and app directories. This creates a layering effect that
allows Splunk to determine configuration priorities based on factors such as the
current user and the current app.
Note: The most accurate list of settings available for a given configuration file is
in the .spec file for that configuration file. You can find the latest version of the
.spec and .example files in the "Configuration file reference", or in
$SPLUNK_HOME/etc/system/README.
"all these worlds are yours, except /default - attempt no editing there"
-- duckfez, 2010
Important: Never change or copy the configuration files in the default directory.
Default files must remain intact and in their original location. The Splunk
Enterprise upgrade process overwrites the default directory, so any changes that
you make in the default directory are lost on upgrade. Changes that you make in
non-default configuration directories, such as $SPLUNK_HOME/etc/system/local or
$SPLUNK_HOME/etc/apps/<app_name>/local, persist through upgrades.
To change attribute values for a particular configuration file, you must first create
a new version of the file in a non-default directory and then modify the values
there. Values in a non-default directory have precedence over values in the
default directory.
When you first create this new version of the file, start with an empty file and add
only the attributes that you need to change. Do not start from a copy of the
default directory. If you copy the entire default file to a location with higher
precedence, any changes to the default values that occur through future Splunk
Enterprise upgrades cannot take effect, because the values in the copied file will
override the updated values in the default file.
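For example, to override a single setting such as the Splunk Web port, you might create $SPLUNK_HOME/etc/system/local/web.conf containing nothing but the stanza and the attribute being changed; the port value here is illustrative:

[settings]
httpport = 8080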
Where you can place (or find) your modified configuration files
You can layer several versions of a configuration file, with different attribute
values used by Splunk according to the layering scheme described in
"Configuration file precedence".
Never edit configuration files in their default directories. Instead, create and edit
your files in one of the configuration directories, such as
$SPLUNK_HOME/etc/system/local. These directories are not overwritten during
upgrades.
$SPLUNK_HOME/etc/system/local
Local changes on a site-wide basis go here; for example, settings you
want to make available to all apps. If the configuration file you're looking
for doesn't already exist in this directory, create it and give it write
permissions.
$SPLUNK_HOME/etc/slave-apps/[_cluster|<app_name>]/[local|default]
For cluster peer nodes only.
The _cluster directory contains configuration files that are not part of real
apps but that still need to be identical across all peers. A typical example
is the indexes.conf file.
$SPLUNK_HOME/etc/apps/<app_name>/[local|default]
If you're in an app when a configuration change is made, the setting goes
into a configuration file in the app's /local directory. For example, edits for
search-time settings in the Search app go here:
$SPLUNK_HOME/etc/apps/search/local/.
If you want to edit a configuration file so that the change only applies to a
certain app, copy the file to the app's /local directory (with write
permissions) and make your changes there.
$SPLUNK_HOME/etc/users
User-specific configuration changes go here.
$SPLUNK_HOME/etc/system/README
This directory contains supporting reference documentation. For most
configuration files, there are two reference files: .spec and .example; for
example, inputs.conf.spec and inputs.conf.example. The .spec file
specifies the syntax, including a list of available attributes and variables.
The .example file contains examples of real-world usage.
Stanzas
For example, inputs.conf provides an [SSL] stanza that includes settings for the
server certificate and password (among other things):
[SSL]
serverCert = <pathname>
password = <password>
Depending on the stanza type, some of the attributes might be required, while
others could be optional.
When you edit a configuration file, you might be changing the default stanza, like
above, or you might need to add a brand-new stanza.
Here's the basic pattern:
[stanza1_header]
<attribute1> = <val1>
# comment
<attribute2> = <val2>
...
[stanza2_header]
<attribute1> = <val1>
<attribute2> = <val2>
...
Important: Attributes are case-sensitive. For example, sourcetype = my_app is
not the same as SOURCETYPE = my_app. One will work; the other won't.
Stanza scope
Configuration files frequently have stanzas with varying scopes, with the more
specific stanzas taking precedence. For example, consider this example of an
outputs.conf configuration file, used to configure forwarders:
[tcpout]
indexAndForward=true
compressed=true
[tcpout:my_indexersA]
compressed=false
server=mysplunk_indexer1:9997, mysplunk_indexer2:9997
[tcpout:my_indexersB]
server=mysplunk_indexer3:9997, mysplunk_indexer4:9997
• The global [tcpout], with settings that affect all tcp forwarding.
• Two [tcpout:<target_list>] stanzas, whose settings affect only the
indexers defined in each target group.
Configuration file precedence
For more information about configuration files, read About configuration files.
Splunk software uses configuration files to determine nearly every aspect of its
behavior. A Splunk platform deployment can have many copies of the same
configuration file. These file copies are usually layered in directories that affect
either the users, an app, or the system as a whole.
• It merges the settings from all copies of the file, using a location-based
prioritization scheme.
• When different copies have conflicting attribute values (that is, when they
set the same attribute to different values), it uses the value from the file
with the highest priority.
Splunk software uses two main schemes of directory precedence.
• App or user: Some activities, like searching, take place in an app or user
context. The app and user context is vital to search-time processing,
where certain knowledge objects or actions might be valid only for specific
users in specific apps.
• Global: Activities like indexing take place in a global context. They are
independent of any app or user. For example, configuration files that
determine monitoring behavior occur outside of the app and user context
and are global in nature.
There's also an expanded precedence order for cluster peer node global
configurations. This is because some configuration files, like indexes.conf, must
be identical across peer nodes.
To keep them consistent, files are managed from the cluster master, which
distributes them to the peer nodes so that all peer nodes contain the same
versions of the files. These files have the highest precedence in a cluster peer's
configuration, which is explained in the next section.
For more information about how configurations are distributed across peer
nodes, see "Update common peer configurations" in the Managing Indexers and
Clusters manual.
When the context is global (that is, where there's no app/user context), directory
priority descends in this order:
When consuming a global configuration, such as inputs.conf, Splunk first uses
the attributes from any copy of the file in system/local. Then it looks for any
copies of the file located in the app directories, adding any attributes found in
them, but ignoring attributes already discovered in system/local. As a last resort,
for any attributes not explicitly assigned at either the system or app level, it
assigns default values from the file in the system/default directory.
Note: As the next section describes, cluster peer nodes have an expanded order
of precedence.
For cluster peer nodes, the global context considers some additional
peer-specific ("slave-app") directories. These directories contain apps and
configurations that are identical across all peer nodes. Here is the expanded
precedence order for cluster peers:
With cluster peers, custom settings common to all the peers (those in the
slave-app local directories) have the highest precedence.
When there's an app/user context, directory priority descends from user to app to
system:
How app directory names affect precedence
Note: For most practical purposes, the information in this subsection probably
won't matter, but it might prove useful if you need to force a certain order of
evaluation or for troubleshooting.
$SPLUNK_HOME/etc/apps/myapp1
$SPLUNK_HOME/etc/apps/myapp10
$SPLUNK_HOME/etc/apps/myapp2
$SPLUNK_HOME/etc/apps/myapp20
...
$SPLUNK_HOME/etc/apps/myappApple
$SPLUNK_HOME/etc/apps/myappBanana
$SPLUNK_HOME/etc/apps/myappZabaglione
...
$SPLUNK_HOME/etc/apps/myappapple
$SPLUNK_HOME/etc/apps/myappbanana
$SPLUNK_HOME/etc/apps/myappzabaglione
...
Lexicographical order sorts items based on the values used to encode the items
in computer memory. In Splunk software, this is almost always UTF-8 encoding,
which is a superset of ASCII.
• Numbers are sorted before letters. Numbers are sorted based on the first
digit. For example, the numbers 10, 9, 70, 100 are sorted lexicographically
as 10, 100, 70, 9.
• Uppercase letters are sorted before lowercase letters.
• Symbols are not standard. Some symbols are sorted before numeric
values. Other symbols are sorted before or after letters.
Note: When determining precedence in the app/user context, directories for the
currently running app take priority over those for all other apps, independent of
how they're named. Furthermore, other apps are only examined for exported
settings.
Putting this all together, the order of directory priority, from highest to lowest,
goes like this:
Global context:

$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/A/local/* ... $SPLUNK_HOME/etc/apps/z/local/*
$SPLUNK_HOME/etc/apps/A/default/* ... $SPLUNK_HOME/etc/apps/z/default/*
$SPLUNK_HOME/etc/system/default/*

Global context, cluster peer nodes only:

$SPLUNK_HOME/etc/slave-apps/A/local/* ... $SPLUNK_HOME/etc/slave-apps/z/local/*
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/A/local/* ... $SPLUNK_HOME/etc/apps/z/local/*
$SPLUNK_HOME/etc/slave-apps/A/default/* ... $SPLUNK_HOME/etc/slave-apps/z/default/*
$SPLUNK_HOME/etc/apps/A/default/* ... $SPLUNK_HOME/etc/apps/z/default/*
$SPLUNK_HOME/etc/system/default/*
App/user context:
$SPLUNK_HOME/etc/users/*
$SPLUNK_HOME/etc/apps/Current_running_app/local/*
$SPLUNK_HOME/etc/apps/Current_running_app/default/*
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/system/default/*
Important: In the app/user context, all configuration files for the currently running
app take priority over files from all other apps. This is true for the app's local and
default directories. So, if the current context is app C, Splunk evaluates both
$SPLUNK_HOME/etc/apps/C/local/* and $SPLUNK_HOME/etc/apps/C/default/*
before evaluating the local or default directories for any other apps. Furthermore,
Splunk software only looks at configuration data for other apps if that data has
been exported globally through the app's default.meta file. For more information,
see Set permissions for objects in a Splunk app on the Splunk Developer Portal.
Also, note that /etc/users/ is evaluated only when the particular user logs in or
performs a search.
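As an illustration, a sketch of the metadata/default.meta entry that exports all of an app's objects globally; the empty stanza name applies to every object in the app:

[]
export = system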
For example, assume $SPLUNK_HOME/etc/system/local/props.conf contains this
stanza:

[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
and $SPLUNK_HOME/etc/apps/t2rss/local/props.conf contains another version
of the same stanza:
[source::/opt/Locke/Logs/error*]
sourcetype = t2rss-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
The line merging attribute assignments in t2rss always apply, as they only occur
in that version of the file. However, there's a conflict with the sourcetype attribute.
In the /system/local version, the sourcetype has a value of "fatal-error". In the
/apps/t2rss/local version, it has a value of "t2rss-error".
Since this is a sourcetype assignment, which gets applied at index time, Splunk
uses the global context for determining directory precedence. In the global
context, Splunk gives highest priority to attribute assignments in system/local.
Thus, the sourcetype attribute gets assigned a value of "fatal-error".
The final, internally merged version of the file looks like this:
[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
List of configuration files and their context

The following configuration files operate in the global context:

admon.conf
authentication.conf
authorize.conf
crawl.conf
deploymentclient.conf
distsearch.conf
indexes.conf
inputs.conf
limits.conf, except for indexed_realtime_use_by_default and
indexed_realtime_disk_sync_delay
outputs.conf
pdf_server.conf
procmonfilters.conf
props.conf -- global and app/user context
pubsub.conf
regmonfilters.conf
report_server.conf
restmap.conf
searchbnf.conf
segmenters.conf
server.conf
serverclass.conf
serverclass.seed.xml.conf
source-classifier.conf
sourcetypes.conf
sysmon.conf
tenants.conf
transforms.conf -- global and app/user context
user-seed.conf -- special case: Must be located in /system/default
web.conf
wmi.conf
The following configuration files operate in the app/user context:

alert_actions.conf
app.conf
audit.conf
commands.conf
eventdiscoverer.conf
event_renderers.conf
eventtypes.conf
fields.conf
literals.conf
macros.conf
multikv.conf
props.conf -- global and app/user context
savedsearches.conf
tags.conf
times.conf
transactiontypes.conf
transforms.conf -- global and app/user context
user-prefs.conf
workflow_actions.conf
Attribute precedence within a single props.conf file
In addition to understanding how attribute precedence works across files, you
also sometimes need to consider attribute priority within a single props.conf file.
When two or more stanzas specify a behavior that affects the same item, items
are evaluated by the stanzas' ASCII order. For example, assume you specify in
props.conf the following stanzas:
[source::.../bar/baz]
attr = val1
[source::.../bar/*]
attr = val2
The second stanza's value for attr will be used, because its path is higher in the
ASCII order and takes precedence.
There's a way to override the default ASCII priority in props.conf. Use the
priority key to specify a higher or lower priority for a given stanza.
For example, suppose a given source is source::az, and you have the following
two stanzas:
[source::...a...]
sourcetype = a
[source::...z...]
sourcetype = z
In this case, the default behavior is that the settings provided by the pattern
"source::...a..." take precedence over those provided by "source::...z...". Thus,
sourcetype will have the value "a".
To override this default ASCII ordering, use the priority key:
[source::...a...]
sourcetype = a
priority = 5
[source::...z...]
sourcetype = z
priority = 10
Assigning a higher priority to the second stanza causes sourcetype to have the
value "z".
You can use the priority key to resolve collisions between patterns of the same
type, such as sourcetype patterns or host patterns. The priority key does not,
however, affect precedence across spec types. For example, source patterns
take priority over host and sourcetype patterns, regardless of priority key values.
The props.conf file sets attributes for processing individual events by host,
source, or sourcetype (and sometimes event type). So it's possible for one event
to have the same attribute set differently for the default fields: host, source or
sourcetype. The precedence order is:
• source
• host
• sourcetype
You might want to override the default props.conf settings. For example,
assume you are tailing mylogfile.xml, which by default is labeled sourcetype =
xml_file. This configuration will re-index the entire file whenever it changes,
even if you manually specify another sourcetype, because the property is set by
source. To override this, add the explicit configuration by source:
[source::/var/log/mylogfile.xml]
CHECK_METHOD = endpoint_md5
Before you edit a configuration file, make sure you are familiar with the following:
• To learn about where configuration files live, and where to put the ones
you edit, see Configuration file directories.
• To learn about file structure and how the attributes you want to edit are set
up, see Configuration file structure.
• To learn how configuration files across multiple directories are layered and
combined, see Configuration file precedence.
To customize an attribute in a configuration file, create a new file with the same
name in a local or app directory. You will then add the specific attributes that you
want to customize to the local configuration file.
Clear an attribute
You can clear any attribute by setting it to null. For example:

forwardedindex.0.whitelist =
This overrides any previous value that the attribute held, including any value set
in its default file, causing the system to consider the value entirely unset.
Insert a comment
You can insert comments in configuration files. To do so, use the # sign at the
start of a line:

# This is a comment.

Important: Start the comment at the left margin. Do not put the comment on the
same line as the stanza or attribute:

a_setting = 5 #5 is the best number

This sets the a_setting attribute to the value "5 #5 is the best number", which
may cause unexpected results.
The Splunk platform works with configuration files with ASCII/UTF-8 encoding.
On operating systems where UTF-8 is not the default character set, for example
Windows, configure your text editor to write files in that format.
When to restart Splunk Enterprise after a
configuration file change
When you make changes to Splunk Enterprise using the configuration files, you
might need to restart Splunk Enterprise for the changes to take effect.
Note: Changes made in Splunk Web are less likely to require restarts. This is
because Splunk Web automatically updates the underlying configuration file(s)
and notifies the running Splunk instance (splunkd) of the changes.
This topic provides guidelines to help you determine whether to restart after a
change. Whether a change requires a restart depends on a number of factors,
and this topic does not provide a definitive authority. Always check the
configuration file or its reference topic to see whether a particular change
requires a restart. For a full list of configuration files and an overview of the area
each file covers, see List of configuration files in this manual.
If you make a configuration file change to a heavy forwarder, you must restart the
forwarder, but you do not need to restart the receiving indexer. If the changes are
part of a deployed app already configured to restart after changes, then the
forwarder restarts automatically.
You must restart splunkweb to enable or disable SSL for Splunk Web access.
As a general rule, restart splunkd after making the following types of changes.
Indexer changes
For restart requirements that are specific to indexer cluster members, see
Managing Indexers and Clusters of Indexers.
Note: When settings that affect indexing are changed through Splunk Web or
the CLI, they do not require restarts and take effect immediately.
Any user and role changes made in configuration files require a restart, including:
• LDAP configurations (If you make these changes in Splunk Web you can
reload the changes without restarting.)
• Password changes
• Changes to role capabilities
• Splunk Enterprise native authentication changes, such as user-to-role
mappings.
System changes
Changes that affect the system settings or server state require restart, such as:
• Licensing changes
• Web server configuration updates
• Changes to general indexer settings (minimum free disk space, default
server name, etc.)
• Changes to General settings (e.g., port settings).
• Changing a forwarder's output settings
• Changing the time zone in the OS of a Splunk Enterprise instance (Splunk
Enterprise retrieves its local time zone from the underlying OS at startup)
• Creating a search head cluster.
• Installing some apps may require a restart. Consult the documentation for
each app you are installing.
Settings that apply to search-time processing take effect immediately and do not
require a restart. This is because searches run in a separate process that reloads
configurations. For example, lookup tables, tags, and event types are re-read for
each search. Search-time settings include:
• Lookup tables
• Field extractions
• Knowledge objects
• Tags
• Event types
Files that contain search-time operations include (but are not limited to):
• macros.conf
• props.conf
• transforms.conf
• savedsearches.conf (If a change creates an endpoint you must restart.)
To reload most configuration files without restarting, request the refresh
endpoint in Splunk Web:
http://<yoursplunkserver>:8000/en-US/debug/refresh
In addition, index-time props and transforms do not require restarts, as long as
your indexers are receiving the data from forwarders.
To reload transforms.conf:
http://<yoursplunkserver>:8000/en-US/debug/refresh?entity=admin/transforms-lookup
for new lookup file definitions that reside within transforms.conf
http://<yoursplunkserver>:8000/en-US/debug/refresh?entity=admin/transforms-extract
for new field transforms/extractions that reside within transforms.conf
To reload authentication.conf, use Splunk Web. Go to Settings > Access
controls > Authentication method and click Reload authentication
configuration. This refreshes the authentication caches, but does not
disconnect current users.
Restart an indexer cluster
To learn about restarts in an indexer cluster, and when and how to use a rolling
restart, see Restart the entire indexer cluster or a single peer node in Managing
Indexers and Clusters of Indexers.
Use cases
Scenario: You edit index-time settings in props.conf, such as:
• line breaking
• timestamp parsing
Index-time changes still require a restart. Search-time settings relate mainly to
field extraction and creation and do not require a restart. For example:
If the search-time changes are on a heavy forwarder, you must restart that
forwarder. (If the changes are part of a deployed app configured to restart after
changes, then this happens automatically.)
Scenario: You edit savedsearches.conf and the new search creates a REST
endpoint. Because the change creates a new endpoint, you must restart Splunk
Enterprise.
List of configuration files
The following is a list of some of the available spec and example files associated
with each conf file. Some conf files do not have spec or example files; contact
Support before editing a conf file that does not have an accompanying spec or
example file.
alert_actions.conf: Create an alert.
app.conf: Configure app properties.
audit.conf: Configure auditing and event hashing. This feature is not available
for this release.
authentication.conf: Toggle between Splunk's built-in authentication or LDAP,
and configure LDAP.
authorize.conf: Configure roles, including granular access controls.
checklist.conf: Customize monitoring console health check.
collections.conf: Configure KV Store collections for apps.
commands.conf: Connect search commands to any custom search script.
datamodels.conf: Attribute/value pairs for configuring data models.
default.meta.conf: Set permissions for objects in a Splunk app.
deploymentclient.conf: Specify behavior for clients of the deployment server.
distsearch.conf: Specify behavior for distributed search.
event_renderers.conf: Configure event-rendering properties.
eventtypes.conf: Create event type definitions.
fields.conf: Create multivalue fields and add search capability for indexed
fields.
indexes.conf: Manage and configure index settings.
inputs.conf: Set up data inputs.
instance.cfg.conf: Designate and manage settings for specific instances of
Splunk. This can be handy, for example, when identifying forwarders for
internal searches.
limits.conf: Set various limits (such as maximum result size or concurrent
real-time searches) for search commands.
literals.conf: Customize the text, such as search error strings, displayed in
Splunk Web.
macros.conf: Define search macros in Settings.
multikv.conf: Configure extraction rules for table-like events (ps, netstat, ls).
outputs.conf: Set up forwarding behavior.
passwords.conf: Maintain the credential information for an app.
procmon-filters.conf: Monitor Windows process data.
props.conf: Set indexing property configurations, including timezone offset,
custom source type rules, and pattern collision priorities. Also, map
transforms to event properties.
pubsub.conf: Define a custom client of the deployment server.
restmap.conf: Create custom REST endpoints.
savedsearches.conf: Define ordinary reports, scheduled reports, and alerts.
searchbnf.conf: Configure the search assistant.
segmenters.conf: Configure segmentation.
server.conf: Contains a wide variety of settings for configuring the overall
state of a Splunk Enterprise instance, including settings for enabling SSL,
configuring nodes of an indexer cluster or a search head cluster, configuring
KV store, and setting up a license master.
serverclass.conf: Define deployment server classes for use with deployment
server.
serverclass.seed.xml.conf: Configure how to seed a deployment client with apps
at start-up time.
source-classifier.conf: Terms to ignore (such as sensitive data) when creating
a source type.
sourcetypes.conf: Machine-generated file that stores source type learning
rules.
tags.conf: Configure tags for fields.
telemetry.conf: Enable apps to collect telemetry data about app usage and other
properties.
times.conf: Define custom time ranges for use in the Search app.
transactiontypes.conf: Add additional transaction types for transaction search.
transforms.conf: Configure regex transformations to perform on data inputs.
Use in tandem with props.conf.
ui-prefs.conf: Change UI preferences for a view, including the default earliest
and latest values for the time range picker.
user-seed.conf: Set a default user and password.
visualizations.conf: List the visualizations that an app makes available to the
system.
viewstates.conf: Use this file to set up UI views (such as charts).
web.conf: Configure Splunk Web, enable HTTPS.
wmi.conf: Set up Windows management instrumentation (WMI) inputs.
workflow_actions.conf: Configure workflow actions.
workload_rules.conf: Configure workload rules to define access and priority for
workload pools in workload management.
workload_pools.conf: Configure workload pools (compute and memory resource
groups) that you can assign to searches in workload management.
Configuration parameters and the data pipeline
Data goes through several phases as it moves through the data pipeline:
• Input
• Parsing
• Indexing
• Search
Each phase of the data pipeline relies on different configuration file parameters.
Knowing which phase uses a particular parameter allows you to identify where in
your Splunk deployment topology you need to set the parameter.
The Distributed Deployment manual describes the data pipeline in detail, in "How
data moves through Splunk: the data pipeline".
One or more Splunk Enterprise components can perform each of the pipeline
phases. For example, a universal forwarder, a heavy forwarder, or an indexer
can perform the input phase.
Data only goes through each phase once, so each configuration belongs on only
one component, specifically, the first component in the deployment that handles
that phase. For example, say you have data entering the system through a set of
universal forwarders, which forward the data to an intermediate heavy forwarder,
which then forwards the data onwards to an indexer. In that case, the input
phase for that data occurs on the universal forwarders, and the parsing phase
occurs on the heavy forwarder.
Data pipeline phase: Components that can perform this role
Input: indexer; universal forwarder; heavy forwarder
Parsing: indexer; heavy forwarder; light/universal forwarder (in conjunction
with the INDEXED_EXTRACTIONS attribute only)
Indexing: indexer
Search: indexer; search head
Where to set a configuration parameter depends on the components in your
specific deployment. For example, you set parsing parameters on the indexers in
most cases. But if you have heavy forwarders feeding data to the indexers, you
instead set parsing parameters on the heavy forwarders. Similarly, you set
search parameters on the search heads, if any. But if you aren't deploying
dedicated search heads, you set the search parameters on the indexers.
For more information, see "Components and the data pipeline" in the Distributed
Deployment Manual.
For example, if you are using universal forwarders to consume inputs, you need
to configure inputs.conf parameters on the forwarders. If, however, your indexer
is directly consuming network inputs, you need to configure those
network-related inputs.conf parameters on the indexer.
The following items in the phases below are listed in the order Splunk applies
them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
• inputs.conf
• props.conf
♦ CHARSET
♦ NO_BINARY_CHECK
♦ CHECK_METHOD
♦ CHECK_FOR_HEADER (deprecated)
♦ PREFIX_SOURCETYPE
♦ sourcetype
• wmi.conf
• regmon-filters.conf
• props.conf
♦ INDEXED_EXTRACTIONS, and all other structured data header
extractions
Parsing phase
• props.conf
♦ LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE,
BREAK_ONLY_BEFORE_DATE, and all other line merging settings
♦ TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ,
and all other time extraction settings and rules
♦ TRANSFORMS which includes per-event queue filtering, per-event
index assignment, per-event routing
♦ SEDCMD
♦ MORE_THAN, LESS_THAN
• transforms.conf
♦ stanzas referenced by a TRANSFORMS clause in props.conf
♦ LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Indexing phase
• props.conf
♦ SEGMENTATION
• indexes.conf
• segmenters.conf
Search phase
• props.conf
♦ EXTRACT
♦ REPORT
♦ LOOKUP
♦ KV_MODE
♦ FIELDALIAS
♦ EVAL
♦ rename
• transforms.conf
♦ stanzas referenced by a REPORT clause in props.conf
♦ filename, external_cmd, and all other lookup-related settings
♦ FIELDS, DELIMS
♦ MV_ADD
• lookup files in the lookups folders
• search and lookup scripts in the bin folders
• search commands and lookup scripts
• savedsearches.conf
• eventtypes.conf
• tags.conf
• commands.conf
• alert_actions.conf
• macros.conf
• fields.conf
• transactiontypes.conf
• multikv.conf
There are some settings that don't work well in a distributed Splunk environment.
These tend to be exceptional and include:
• props.conf
♦ CHECK_FOR_HEADER (deprecated), LEARN_MODEL, maxDist. These are
created in the parsing phase, but they require generated
configurations to be moved to the search phase configuration
location.
Back up configuration information by making an archive or copy of
$SPLUNK_HOME/etc/, which holds the default and custom configuration files.
Copy this directory to a new Splunk instance to restore. You don't have to stop
Splunk to do this.
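For example, a minimal backup sketch, assuming a default installation under
/opt/splunk:
# Archive the configuration directory; Splunk can keep running.
tar -czf splunk-etc-backup.tar.gz -C /opt/splunk etc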
For more information about configuration files, read "About configuration files".
If you're using index replication, you can back up the master node's static
configuration. This is of particular use when configuring a stand-by master that
can take over if the primary master fails. For details, see "Configure the master"
in the Managing Indexers and Clusters manual.
File validation can identify when the contents of the files of a Splunk software
instance have been modified in a way that is not valid. You can run this check
manually, and it also runs automatically on startup. If you are an admin, you can
view the results in a Monitoring Console health check or in a dashboard from any
node.
Run the check manually
You might want to run the integrity check manually, for example after an
upgrade or when you suspect that installed files have been modified.
To run the check manually with default settings, from the installation directory,
type ./splunk validate files. You can manually run the integrity check with
two controls.
• You can specify the file describing the correct file contents with -manifest.
You might want to do this to check against an old manifest from a prior
installation after a botched upgrade, to validate that the files are simply
stale. You can use any valid manifest file. A manifest file ships in the
installation directory with a new Splunk Enterprise download.
• You can constrain the test to only files that end with .conf by using -type
conf.
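For example, hedged sketches of manual runs (the manifest path is illustrative):
# Validate all files against the shipped manifest.
./splunk validate files
# Validate against a manifest from a prior installation.
./splunk validate files -manifest /opt/splunk/old-manifest
# Check only the .conf files.
./splunk validate files -type conf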
At startup, the check runs automatically in two parts. First, as part of the
pre-flight check before splunkd starts, the check quickly validates only the
default conf files and writes a message to your terminal.
Next, after splunkd starts, the check validates all files shipped with Splunk
Enterprise (default conf files, libraries, binaries, data files, and so on). This more
complete check writes the results to splunkd.log as well as to the bulletin
message system in Splunk Web. You can configure it in limits.conf.
You can configure the second part of the check with settings in limits.conf.
See limits.conf.spec.
Reading all the files provided with the installation has a moderate effect on I/O
performance. If you need to restart Splunk software several times in a row, you
might wish to disable this check temporarily to improve I/O performance.
Files are validated against the manifest file in the installation directory. If this file
is removed or altered, the check cannot work correctly.
If you are an admin, you can view the results in a Monitoring Console health
check or in a dashboard from any node. See Access and customize health check
for more information about the Monitoring Console health check.
If an integrity check returns an error, such as "File Integrity checks found files
that did not match the system-provided manifest", here are some tips to get you
started resolving the problem.
• If it cannot read some files, Splunk software may have been run as two or
more different users or security contexts. Files created at install time
under one user or context might not be readable by the service now
running as another context. Alternatively, you might have legitimately
modified the access rules to these files, but this is far less common.
• If the integrity check reports that it cannot read or comprehend the
manifest, the manifest might be simply missing from $SPLUNK_HOME, or you
have access problems to it, or the file may be corrupted. You might want
to evaluate whether all the files from the installation package made it to
the installation directory, and that the manifest contents are the same as
the ones from the package. The manifest is not required for Splunk
software to function, but the integrity check cannot function without it.
• If the integrity check reports all or nearly all files are incorrect, splunkd and
etc/splunk.version might be in disagreement with the rest of the
installation. Try to determine how this could have happened. It might be
that the majority of the files are the ones you intended to be present.
• If the pattern is not described above, you might need to apply local
analysis and troubleshooting skills possibly in concert with Splunk
Support.
If Splunk Enterprise starts with the integrity check disabled in limits.conf, then
REST file integrity information is not available. In addition, manual runs do not
update the results.
Administer Splunk Enterprise with the
command line interface (CLI)
You can find the Splunk installation path on your instance through Splunk Web
by clicking Settings > Server settings > General settings.
If you have administrator privileges, you can use the CLI not only to search but
also to configure and monitor your Splunk instance or instances. The CLI
commands used for configuring and monitoring Splunk are not search
commands. Search commands are arguments to the search and dispatch CLI
commands. Some commands require you to authenticate with a username and
password or specify a target Splunk server.
UNIX: ./splunk help
Windows: splunk help
For more information about how to access help for specific CLI commands or
tasks, see "Get help with the CLI" and "Administrative CLI commands" in this
manual.
If you have administrator or root privileges, you can simplify CLI access by
adding the top level directory of your Splunk platform installation,
$SPLUNK_HOME/bin, to your shell path.
This example works for Linux/BSD/Solaris users who installed Splunk Enterprise
in the default location:
# export SPLUNK_HOME=/opt/splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
This example works for Mac users who installed Splunk Enterprise in the default
location:
# export SPLUNK_HOME=/Applications/Splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
If you have not added Splunk to your shell path, run CLI commands from the
$SPLUNK_HOME/bin directory:
./splunk <command>
Mac OS X requires superuser level access to run any command that accesses
system files or directories. Run CLI commands using sudo or "su -" for a new
shell as root. The recommended method is to use sudo. (By default the user
"root" is not enabled but any administrator user can use sudo.)
Work with the CLI on Windows
You do not need to set Splunk environment variables to use the CLI on Windows.
If you want to use environment variables to run CLI commands, you must set the
variables manually, because Windows does not set the variables by default.
Set Splunk environment variables permanently
After you complete this procedure, Windows uses the values you set for the
variables until you either change or delete the variable entries.
Answers
Have questions? Visit Splunk Answers and see what questions and answers the
Splunk community has around using the CLI.
If you need to find a CLI command or syntax for a CLI command, use Splunk's
built-in CLI help reference.
To start, you can access the default help information with the help command:
./splunk help
This will return a list of objects to help you access more specific CLI help topics,
such as administrative commands, clustering, forwarding, licensing, searching,
etc.
Universal parameters
Some commands require that you authenticate with a username and password,
or specify a target host or app. For these commands you can include one of the
universal parameters: auth, app, or uri.
./splunk [command] [object] [-parameter <value> | <value>]... [-app]
[-owner] [-uri] [-auth]
Parameter: Description
app: Specify the app or namespace to run the command; for search, defaults to
the Search app.
auth: Specify login credentials to execute commands that require you to be
logged in.
owner: Specify the owner/user context associated with an object; if not
specified, defaults to the currently logged in user.
uri: Execute a command on any specified (remote) Splunk server.
app
In the CLI, app is an object for many commands, such as create app or enable
app. But, it is also a parameter that you can add to a CLI command if you want to
run that command on a specific app.
Syntax:
./splunk command object [-parameter value]... -app appname
For example, when you run a search in the CLI, it defaults to the Search app. If
you want to run the search in another app:
./splunk search 'error' -app <app_name>
auth
If a CLI command requires authentication, Splunk will prompt you to supply the
username and password. You can also use the -auth flag to pass this
information inline with the command. The auth parameter is also useful if you
need to run a command that requires different permissions to execute than the
currently logged-in user has.
Note: auth must be the last parameter specified in a CLI command argument.
Syntax:
./splunk command object [-parameter value]... -auth username:password
uri
If you want to run a command on a remote Splunk server, use the -uri flag to
specify the target host.
Syntax:
./splunk command object [-parameter value]... -uri specified-server
where specified-server takes the format:
[http|https]://name_of_server:management_port
You can specify an IP address for the name_of_server. Both IPv4 and IPv6
formats are supported; for example, the specified-server may read as:
127.0.0.1:80 or "[2001:db8::1]:80". By default, splunkd listens on IPv4 only. To
enable IPv6 support, refer to the instructions in "Configure Splunk for IPv6".
Example: The following example returns search results from the remote
"splunkserver" on port 8089:
./splunk search 'error' -uri https://splunkserver:8089
For more information about the CLI commands you can run on a remote server,
see the next topic in this chapter.
When you run the default Splunk CLI help, you will see these objects listed.
You can use the CLI for administrative functions such as adding or editing inputs,
updating configuration settings, and searching. If you want to see the list of
administrative CLI commands, type in:
./splunk help commands
These commands are discussed in more detail in "Administrative CLI
commands", the next topic in this manual.
You can use the CLI to view and edit clustering configurations on the cluster
master or cluster peer. For the list of commands and parameters related to
clustering, type in:
./splunk help clustering
For more information, read "Configure the cluster with the CLI" in the Managing
Indexers and Clusters manual.
Use the CLI to start, stop, and restart Splunk server (splunkd) and web
(splunkweb) processes or check to see if the process is running. For the list of
controls, type in:
./splunk help controls
For more information, read "Start and stop Splunk" in the Admin Manual.
When you add data to Splunk, Splunk processes it and stores it in an index. By
default, data you feed to Splunk is stored in the main index, but you can use the
CLI to create and specify other indexes for Splunk to use for different data inputs.
To see the list of objects and commands to manage indexes and datastores, type
in:
./splunk help index
For more information, read "About managing indexes", "Create custom indexes",
and "Remove indexes and data from Splunk" in the Managing Indexers and
Clusters manual.
CLI help for distributed search deployments
Use the CLI to view and manage your distributed search configurations. For the
list of objects and commands, type in:
./splunk help distributed
For information about distributed search, read "About distributed search" in the
Distributed Search manual.
For more information, read "About forwarding and receiving" in the Forwarding
Data manual.
You can also use the CLI to run both historical and real-time searches. Access
the help page about Splunk search and real-time search with:
./splunk help search
./splunk help rtsearch
Note: The Splunk CLI interprets spaces as breaks. Use dashes between multiple
words for topic names that are more than one word.
To learn more about searching your data with the CLI, refer to "About CLI
searches" and "Syntax for CLI searches" in the Search Reference Manual and
"Real-time searches and reports in the CLI" in the Search Manual.
Administrative CLI commands
This topic discusses the administrative CLI commands, which are the commands
used to manage or configure your Splunk server and distributed deployment.
For information about accessing the CLI and what is covered in the CLI help, see
the previous topic, Get help with the CLI. If you're looking for details about how to
run searches from the CLI, see About CLI searches in the Search Reference.
Your Splunk role configuration dictates what actions (commands) you can
execute. Most actions require you to have Splunk admin privileges. Read more
about setting up and managing Splunk users and roles in the About users and
roles topic in the Admin Manual.
./splunk add cluster-master https://127.0.0.1:8089 -secret testsecret
-multisite false
1. Replaces identifying data, such as
usernames and IP addresses, in the file located
at /tmp/messages.
2. globaldata refers to host tags and source
type aliases.
edit
Objects: saved-search, search-server, tcp, udp, user
Example:
./splunk edit monitor /var/log -follow-only true
enable
Objects: app, boot-start, deploy-client, deploy-server, dist-search, index,
listen, local-index, maintenance-mode, perfmon, webserver, web-ssl, wmi
Examples:
1. Sets the maintenance mode on peers in indexer clustering. Must be invoked
at the master:
./splunk enable maintenance-mode
2. Enables the col1 collection:
./splunk enable perfmon col1
1. Exports data out of your Splunk server into
/tmp/apache_raw_404_logs.
list
Objects: inputstatus, licenser-groups, licenser-localslave, licenser-messages,
licenser-pools, licenser-slaves, licenser-stacks, licenses, jobs, master-info,
monitor, peer-info, peer-buckets, perfmon, saved-search, search-server, tcp,
udp, user, wmi
Example: Lists all licenses across all stacks:
./splunk list licenses
login, logout
Objects: NONE
offline
Objects: NONE
Examples:
1. Used to shut down the peer in a way that does not affect existing searches.
The master rearranges the primary peers for buckets, and fixes up the cluster
state in case the enforce-counts flag is set:
./splunk offline
2. Because the --enforce-counts flag is used, the cluster is completely fixed up
before this peer is taken down:
./splunk offline --enforce-counts
3. Rebalances data using the optional
-max_runtime parameter to limit the
rebalancing activity to 5 minutes.
rtsearch
Parameters: index_latest, max_time, maxout, output, preview, rt_id, timeout,
uri, wrap
Example: Runs a real-time search. Use rtsearch exactly as you use the
traditional search command:
./splunk rtsearch 'error' -wrap false
validate index
Example: Uses main as the index to validate. Verifies index paths specified in
indexes.conf:
./splunk validate index main
version
Objects: NONE
Exporting search results with the CLI
You can use the CLI to export large numbers of search results. For information
about how to export search results with the CLI, as well as information about the
other export methods offered by Splunk Enterprise, see Export search results in
the Search Manual.
The Splunk CLI also includes tools that help with troubleshooting. Invoke these
tools using the CLI command cmd:
./splunk cmd <tool>
For the list of CLI utilities, see Command line tools for use with Support in the
Troubleshooting Manual.
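For example, a common use of cmd is btool, which shows merged configuration
settings:
# Show the merged inputs configuration and the file each setting comes from.
./splunk cmd btool inputs list --debug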
Note: Remote CLI access is disabled by default for the admin user until you have
changed its default password.
If you are running Splunk Free (which has no login credentials), remote access is
disabled by default until you've edited
$SPLUNK_HOME/etc/system/local/server.conf and set the value:
allowRemoteLogin=always
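A minimal sketch of that server.conf edit, assuming the setting lives in the
[general] stanza:
[general]
allowRemoteLogin = always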
Note: The add oneshot command works on local instances but cannot be used
remotely.
For more information about editing configuration files, refer to About configuration
files in this manual.
The general syntax for using the uri parameter with any CLI command is:
[http|https]://name_of_server:management_port
Also, the name_of_server can be the fully resolved domain name or the IP
address of the remote Splunk Enterprise instance.
Important: This uri value is the mgmtHostPort value that you defined in web.conf
on the remote Splunk Enterprise instance. For more information, see the
web.conf reference in this manual.
For general information about the CLI, see About the CLI and Get help with the
CLI in this manual.
The following example returns search results from the remote "splunkserver":
./splunk search 'error' -uri https://splunkserver:8089
For details on syntax for searching using the CLI, refer to About CLI searches in
the Search Reference Manual.
View apps installed on a remote instance
The following example returns the list of apps that are installed on the remote
"splunkserver":
./splunk display app -uri https://splunkserver:8089
You can set a default URI value using the SPLUNK_URI environment variable. If
you change this value to be the URI of the remote server, you do not need to
include the uri parameter each time you want to access that remote server.
$ export SPLUNK_URI=[http|https]://name_of_server:management_port  # For Unix shells
C:\> set SPLUNK_URI=[http|https]://name_of_server:management_port  # For Windows shell
For the examples above, you can change your SPLUNK_URI value by typing:
$ export SPLUNK_URI=https://splunkserver:8089
With the exception of commands that control the server, you can run all CLI
commands remotely. These server control commands include:
• start, stop, restart
• status, version
You can view all CLI commands by accessing the CLI help reference. See Get
help with the CLI in this manual.
Customize the CLI login banner
If your organization requires that users see special information, such as a legal
notice or access policy, when they use the CLI, you can create a custom banner
and require basic authentication for your CLI logins.
To create a custom login banner and add basic authentication, add the following
stanzas to your local server.conf file:
[httpServer]
cliLoginBanner = <string>
allowBasicAuth = true|false
basicAuthRealm = <string>
cliLoginBanner = <string>
Create a message that you want your user to see in the Splunk CLI, such as
access policy information, before they are prompted for authentication
credentials. The default value is no message.
To create a multi-line banner, place the lines in a comma separated list, putting
each line in double-quotes. For example:
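# Illustrative two-line banner; the message text is hypothetical.
cliLoginBanner = "Access requires authorization.","All activity is monitored."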
allowBasicAuth = true|false
Set this value to true if you want to require clients to make authenticated
requests to the Splunk server using "HTTP Basic" authentication in addition to
Splunk's existing (authtoken) authentication. This is useful for allowing
programmatic access to REST endpoints and for allowing access to the REST
API from a web browser. It is not required for the UI or CLI. The default value is
true.
basicAuthRealm = <string>
If you have enabled allowBasicAuth, use this attribute to add a text string that
can be presented in a Web browser when credentials are prompted. You can
display a short message that describes the server and/or access policy. The text
"/splunk" displays by default.
Start Splunk Enterprise and perform initial
tasks
Splunk Enterprise installs with two services, splunkd and splunkweb. In normal
operation, only splunkd runs, handling all Splunk Enterprise operations, including
the Splunk Web interface. To change this, you must put Splunk Enterprise in
legacy mode. Read Start Splunk Enterprise on Windows in legacy mode.
You can start and stop Splunk on Windows in one of the following ways:
1. Start and stop Splunk Enterprise processes via the Windows Services control
panel (accessible from Start -> Control Panel -> Administrative Tools ->
Services)
2. Start and stop Splunk Enterprise services from a command prompt by using
the NET START <service> or NET STOP <service> commands:
NET START splunkd
NET STOP splunkd
3. Start, stop, and restart Splunk Enterprise from the bin directory with the
splunk command:
> splunk [start|stop|restart]
If you want to run Splunk Enterprise in legacy mode, where splunkd and
splunkweb both run, you must change a configuration parameter.
Important: Do not run Splunk Web in legacy mode permanently. Use legacy
mode to temporarily work around issues introduced by the new integration of the
user interface with the main splunkd service. Once you correct the issues, return
Splunk Web to normal mode as soon as possible.
[settings]
appServerPorts = 0
httpport = 8000
4. Save the file and close it.
5. Restart Splunk Enterprise. The splunkd and splunkweb services start and
remain running.
Splunk Enterprise installs with one process on *nix, splunkd. In normal operation,
only splunkd runs, handling all Splunk Enterprise operations, including the
Splunk Web interface. To change this, you must put Splunk Enterprise in legacy
mode. See "Start Splunk Enterprise on Unix in legacy mode."
Start Splunk Enterprise
From a shell prompt on the Splunk Enterprise server host, run this command:
# splunk start
Note: If you have configured Splunk Enterprise to start at boot time, you should
start it using the service command. This ensures that the user configured in the
init.d script starts the software.
or
# splunk restart
If you want to run Splunk Enterprise in such a way that splunkd and splunkweb
both run, you must put Splunk Enterprise into legacy mode.
2. Make a copy of web.conf and place it into $SPLUNK_HOME/etc/system/local.
[settings]
appServerPorts = 0
httpport = 8000
5. Save the file and close it.
To stop Splunk Enterprise, run this command from the server host:
# splunk stop
To check if Splunk Enterprise is running, type this command at the shell prompt
on the server host:
# splunk status
splunkd is running (PID: 3162).
splunk helpers are running (PIDs: 3164).
If Splunk Enterprise runs in legacy mode, the output includes an additional line
for the splunkweb process.
If splunk status decides that the service is running it will return the status code
0, or success. If splunk status determines that the service is not running it will
return the Linux Standard Base value for a non-running service, 3. Other values
likely indicate splunk status has encountered an error.
You can also use ps to check for running Splunk Enterprise processes:
ps aux | grep splunk | grep -v grep
To restart Splunk Enterprise, run:
# splunk restart
This will restart the splunkd and (in legacy mode only) the splunkweb processes.
On *nix platforms, you must configure the software to start at boot time after you
install it.
You can configure the software as either the root user, or as a regular user with
the sudo command. Nearly all distributions include sudo but if yours does not
have it, you should consult the help for your distribution to download, install, and
configure it.
Splunk provides a utility that updates your system boot configuration so that the
software starts when the system boots up. This utility creates an init script (or
makes a similar configuration change, depending on your OS).
1. Log into the machine that you have installed Splunk software on and that
you want to configure to run at boot time.
2. Become the root user if able. Otherwise, you must run the following
commands with the sudo utility.
3. Run the following command:
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start
If you do not run Splunk software as the root user, you can pass in the -user
parameter to specify the Splunk software user. The user that you want to run
Splunk software as must already exist. If it does not, then create the user prior to
running this procedure.
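For example, a sketch that passes the -user parameter (the username bob
matches the procedure that follows):
sudo $SPLUNK_HOME/bin/splunk enable boot-start -user bob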
The following procedure configures Splunk software to start at boot time as the
user 'bob'. You can substitute 'bob' with the user that Splunk software should use
to start at boot time on the local machine.
[sudo] chown -R bob $SPLUNK_HOME
5. Using a text editor, open /etc/init.d/splunk for editing.
6. Make the following changes as shown in the "After" table:
Before
RETVAL=0
. /etc/init.d/functions
splunk_start() {
echo Starting Splunk...
"$SPLUNK_HOME/bin/splunk" start --no-prompt --answer-yes
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_stop() {
echo Stopping Splunk...
"$SPLUNK_HOME/bin/splunk" stop
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}
splunk_restart() {
echo Restarting Splunk...
"$SPLUNK_HOME/bin/splunk" restart
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_status() {
echo Splunk status:
"$SPLUNK_HOME/bin/splunk" status
RETVAL=$?
}
case "$1" in
After
RETVAL=0
USER=bob
. /etc/init.d/functions
splunk_start() {
echo Starting Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" start --no-prompt
--answer-yes'
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_stop() {
echo Stopping Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" stop'
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}
splunk_restart() {
echo Restarting Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" restart'
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_status() {
echo Splunk status:
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" status'
RETVAL=$?
}
case "$1" in
Confirm that each splunk command has single quotes around it.
7. Save the file and close it.
Changes take effect the next time you boot the machine.
These instructions work for both Splunk Enterprise and the AIX version of the
Splunk universal forwarder. Splunk does not offer a version of Splunk Enterprise
for AIX for versions later than 6.3.0.
The AIX version of Splunk does not register itself to auto-start on machine boot.
You can configure it to use the System Resource Controller (SRC) to handle
boot-time startup.
When you enable boot start on an AIX system, Splunk software interacts with the
AIX SRC to enable automatic starting and stopping of Splunk services.
When you enable automatic boot start, the SRC handles the run state of the
Splunk Enterprise service. You must use a different command to start and stop
Splunk software manually.
If you try to start and stop the software with the ./splunk [start|stop] method
from the $SPLUNK_HOME directory, the SRC catches the attempt and displays a
message that directs you to use the SRC commands instead.
• For more information on the mkssys command line arguments, see Mkssys
command on the IBM pSeries and AIX Information Center website.
• For more information on the SRC, see System resource controller on the
IBM Knowledge Center website.
Enable boot-start on MacOS
If you want, you can still enable boot-start manually. You must either have root
level permissions or use sudo to run the following command. You must have at
least administrator access to your Mac to use sudo. If you installed Splunk
software in a different directory, replace the example below with your instance
location.
cd /Applications/Splunk/bin
4. Enable boot start:
sudo ./splunk enable boot-start
<key>UserName</key>
<string><user Splunk Enterprise should run as></string>
8. Save the file and close it.
Changes take effect the next time you boot the machine.
Disable boot-start
If you want to stop Splunk software from running at machine boot time, run:
./splunk disable boot-start
By default, Splunk starts automatically when you start your Windows machine.
You can configure the Splunk processes (splunkd and splunkweb) to start
manually from the Windows Services control panel.
To learn more about boot-start and how to enable it, see the following:
Your registration authorizes you to receive a temporary (60 day) Enterprise trial
license, which allows a maximum indexing volume of 500 MB/day. This license is
included with your download.
For more information about Splunk licensing, read How Splunk licensing works in
this manual.
Where is your new license?
When you request a new license, you should receive the license in an email from
Splunk. You can also access that new license in your splunk.com My Orders
page.
To install and update your licenses via Splunk Web, navigate to Settings >
Licensing and follow these instructions.
You can change how Splunk Enterprise starts by setting environment variables
on your operating system.
On *nix, use the setenv or export commands to set a particular variable. For
example:
# export SPLUNK_HOME=/opt/splunk
There are several environment variables that are available:
SPLUNK_SERVER_NAME: The name of the splunkd service (on Windows) or
process (on *nix). Do not set this variable unless you know what you are doing.
SPLUNK_WEB_NAME: The name of the splunkweb service (on Windows) or
process (on *nix). Do not set this variable unless you know what you are doing.
You can also edit these environment variables for each instance by editing
splunk-launch.conf or, in some cases, web.conf. This is handy when you run
more than one Splunk software instance on a host. See "splunk-launch.conf".
• The HTTP/HTTPS port. This port provides the socket for Splunk Web. It
defaults to 8000.
• The appserver port. 8065 by default.
• The management port. This port is used to communicate with the
splunkd daemon. Splunk Web talks to splunkd on this port, as does the
command line interface and any distributed connections from other
servers. This port defaults to 8089.
• The KV store port. 8191 by default.
Important: During installation, you might have set these ports to values other
than the defaults.
Note: Splunk instances receiving data from forwarders must be configured with
an additional port, the receiver port. They use this port to listen for incoming data
from forwarders. This configuration does not occur during installation. The default
receiver port is 9997. For more information, see "Enable a receiver" in the
Forwarding Data Manual.
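For example, a hedged sketch of enabling a receiver on the default port from the
CLI:
./splunk enable listen 9997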
Use Splunk CLI
To change the port settings via the Splunk CLI, use the CLI command set. For
example, this command sets the Splunk Web port to 9000:
./splunk set web-port 9000
The Splunk server name setting controls both the name displayed within Splunk
Web and the name sent to other Splunk Servers in a distributed setting.
The default name is taken from either the DNS or IP address of the Splunk
Server host.
To change the server name via the CLI, use the set servername command. For
example, this command sets the server name to foo:
./splunk set servername foo
The datastore is the top-level directory where the Splunk Server stores all
indexed data.
Note: If you change this directory, the server does not migrate old datastore files.
Instead, it starts over again at the new location.
To migrate your data to another directory follow the instructions in "Move an
index".
After you change the datastore directory, restart Splunk Enterprise from the CLI:
splunk restart
Important: Do not use the restart function inside Settings. This will not have the
intended effect of causing the index directory to change. You must restart from
the CLI.
To change the datastore directory via the CLI, use the set datastore-dir
command. For example, this command sets the datastore directory to
/var/splunk/:
./splunk set datastore-dir /var/splunk/
The minimum free disk space setting controls how low disk space in the
datastore location can fall before Splunk software stops indexing.
To change this value in Splunk Web, go to Settings > Server settings > General
settings, change the value for Pause indexing if free disk space falls below, and
click Save.
To change the minimum free space value via the CLI, use the set minfreemb
command. For example, this command sets the minimum free space to 2000
MB:
./splunk set minfreemb 2000
The default time range for ad hoc searches in the Search & Reporting App is set
to Last 24 hours. An administrator can set the default time range globally,
across all apps. The setting is stored in the
$SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf file in the
[general_default] stanza.
This setting applies to all Search pages in Splunk Apps, not just the Search &
Reporting App. This setting applies to all user roles.
To change the global default in Splunk Web, click Settings, then from the
Default search time range drop-down, select the time that you want to use, and
click Save.
You might already have a time range setting in the ui-prefs.conf file for a
specific application or user. The settings in the ui-prefs.conf file take
precedence over any settings that you make to the global default time range
using Splunk Web.
However, if you want to use the global default time range for all users and
applications, consider removing the settings you have in the ui-prefs.conf file.
The Splunk Web Settings General Settings screen has a few other default
settings that you might want to change. Explore the screen to see the range of
options.
See also
user-prefs.conf
ui-prefs.conf
Bind Splunk to an IP
You can force Splunk to bind its ports to a specified IP address. By default,
Splunk will bind to the IP address 0.0.0.0, meaning all available IP addresses.
Changing Splunk's bind IP only applies to the Splunk daemon (splunkd), which
listens on:
To bind the Splunk Web process (splunkweb) to a specific IP, use the
server.socket_host setting in web.conf.
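For example, a sketch of that web.conf setting, with an illustrative address:
[settings]
server.socket_host = 10.10.10.1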
Temporarily
To bind to a specific IP for the current session only, set the SPLUNK_BINDIP
environment variable before you start Splunk software.
Permanently
To make the change permanent, edit $SPLUNK_HOME/etc/splunk-launch.conf
and set SPLUNK_BINDIP to the address you want:
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory this
configuration
# file was found in
#
# SPLUNK_HOME=/opt/splunk
SPLUNK_BINDIP=127.0.0.1
If you set SPLUNK_BINDIP to an address other than 127.0.0.1, for example:
SPLUNK_BINDIP=10.10.10.1
you must also make this change in web.conf (assuming the management port is
8089):
mgmtHostPort=10.10.10.1:8089
IPv6 considerations
Starting in version 4.3, the web.conf mgmtHostPort setting has been extended to
allow it to take IPv6 addresses if they are enclosed in square brackets.
Therefore, if you configure splunkd to only listen on IPv6 (via the setting in
server.conf described in "Configure Splunk for IPv6" in this manual), you must
change this from 127.0.0.1:8089 to [::1]:8089.
This topic discusses Splunk's support for IPv6 and how to configure it.
Starting in version 4.3, Splunk supports IPv6. Users can connect to Splunk Web,
use the CLI, and forward data over IPv6 networks. IPv6 is not supported on the
following platforms:
• HPUX PA-RISC
• Solaris 8 and 9
• AIX
You have a few options when configuring Splunk to listen over IPv6. You can
configure Splunk to:
• connect to IPv6 addresses only and ignore all IPv4 results from DNS
• connect to both IPv4 and IPv6 addresses and
♦ try the IPv6 address first
♦ try the IPv4 address first
• connect to IPv4 addresses only and ignore all IPv6 results from DNS
To configure how splunkd listens, set the following in server.conf:
listenOnIPv6=[yes|no|only]
• yes means that splunkd will listen for connections from both IPv6 and
IPv4.
• no means that splunkd will listen on IPv4 only; this is the default setting.
• only means that Splunk will listen for incoming connections on IPv6 only.
To control how splunkd connects out, set the following in server.conf:
connectUsingIpVersion=[4-first|6-first|4-only|6-only|auto]
• 4-first means splunkd will try to connect to the IPv4 address first and if
that fails, try IPv6.
• 6-first is the reverse of 4-first. This is the policy most IPv6-enabled
client apps like web browsers take, but can be less robust in the early
stages of IPv6 deployment.
• 4-only means that splunkd will ignore any IPv6 results from DNS.
• 6-only means that splunkd will ignore any IPv4 results from DNS.
• auto means that splunkd picks a reasonable policy based on the setting of
listenOnIPv6. This is the default value.
♦ If splunkd is listening only on IPv4, this behaves as though you
specified 4-only.
♦ If splunkd is listening only on IPv6, this behaves as though you
specified 6-only.
♦ If splunkd is listening on both, this behaves as though you specified
6-first.
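For example, a sketch of a server.conf edit that listens on both versions but
prefers IPv6 when connecting, assuming these settings belong in the [general]
stanza:
[general]
listenOnIPv6 = yes
connectUsingIpVersion = 6-first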
Important: These settings only affect DNS lookups. For example, a setting of
connectUsingIpVersion = 6-first will not prevent a stanza with an explicit IPv4
address (like "server=10.1.2.3:9001") from working.
If you have just a few inputs and don't want to enable IPv6 for
your entire deployment
If you've just got a few data sources coming over IPv6 but don't want to enable it
for your entire Splunk deployment, you can add the listenOnIPv6 setting
described above to any [udp], [tcp], [tcp-ssl], [splunktcp], or
[splunktcp-ssl] stanza in inputs.conf. This overrides the setting of the same
name in server.conf for that particular input.
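For example, a sketch of a per-input override, with an illustrative TCP port:
[tcp://:5514]
listenOnIPv6 = yes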
Your Splunk forwarders can forward over IPv6; the following are supported in
outputs.conf:
• The server setting in [tcpout] stanzas can include IPv6 addresses in the
standard [host]:port format.
• The [tcpout-server] stanza can take an IPv6 address in the standard
[host]:port format.
• The server setting in [syslog] stanzas can include IPv6 addresses in the
standard [host]:port format.
Your Splunk distributed search deployment can use IPv6; for example, the
servers setting in distsearch.conf can include IPv6 addresses in the standard
[host]:port format.
If your network policy allows or requires IPv6 connections from web browsers,
you can configure the splunkweb service to behave differently than splunkd.
Starting in 4.3, web.conf supports a listenOnIPv6 setting. This setting behaves
exactly like the one in server.conf described above, but applies only to Splunk
Web.
The existing web.conf mgmtHostPort setting has been extended to allow it to take
IPv6 addresses if they are enclosed in square brackets. Therefore, if you
configure splunkd to only listen on IPv6 (via the setting in server.conf described
above), you must change this from 127.0.0.1:8089 to [::1]:8089.
The Splunk CLI can communicate to splunkd over IPv6. This works if you have
set mgmtHostPort in web.conf, defined the $SPLUNK_URI environment variable, or
use the -uri command line option. When using the -uri option, be sure to
enclose IPv6 IP address in brackets and the entire address and port in quotes,
for example: -uri "[2001:db8::1]:80".
If you are using IPv6 with SSO, you do not use the square bracket notation for
the trustedIP property, as shown in the example below. This applies to both
web.conf and server.conf.
In the following web.conf example, the mgmtHostPort attribute uses the square
bracket notation, but the trustedIP attribute does not:
[settings]
mgmtHostPort = [::1]:8089
startwebserver = 1
listenOnIPv6=yes
trustedIP=2620:70:8000:c205:250:56ff:fe92:1c7,::1,2620:70:8000:c205::129
SSOMode = strict
remoteUser = X-Remote-User
tools.proxy.on = true
For more information on SSO, see "Configure Single Sign-on" in the Securing
Splunk Enterprise manual.
• Set up users and roles. You can configure users using Splunk's native
authentication and/or use LDAP to manage users. See About user
authentication.
• Set up certificate authentication (SSL). Splunk ships with a set of default
certificates that should be replaced for secure authentication. For guidelines
and further instructions on adding SSL encryption and authentication, see
Configure secure authentication.
The Securing Splunk Enterprise manual provides more information about ways
you can secure Splunk, including a checklist for hardening your configuration.
See Securing Splunk Enterprise for more information.
Splunk apps
In addition to the data enumerated in this topic, certain apps might collect usage
data. See the documentation for your app for details. The following apps collect
additional data. Check back for updates.
• Splunk App for AWS: Share data in the Splunk App for AWS
• Splunk Add-on Builder: Share data in Splunk Add-on Builder
• Splunk DB Connect: Share data in Splunk DB Connect
• Splunk App for ServiceNow: Share data in the Splunk App for ServiceNow
• Splunk Security Essentials: Sending usage data to Splunk for Splunk
Security Essentials
• Splunk Enterprise Security: Share data in Splunk Enterprise Security
• Splunk Industrial Asset Intelligence: Share data in Splunk Industrial Asset
Intelligence
• Splunk Machine Learning Toolkit: Share data in the Splunk Machine
Learning Toolkit
The table below summarizes the data that your Splunk platform deployment can
send to Splunk. Follow the links for more information.
Web analytics portion of anonymized usage data: Used to improve Splunk
products and services.
Support usage data (not Web analytics): Opt in required: No. Configure and
view in Settings > Instrumentation. Used by Support and Customer Success
teams to troubleshoot and improve a customer's implementation.
Web analytics portion of Support usage data: Opt in required: No. Configure in
Settings > Instrumentation. See What usage data is collected. Used by Support
and Customer Success teams to troubleshoot and improve a customer's
implementation.
Banner phone home data: Opt in required: Yes. See About update checker
data. Used by Splunk software to display a message in Splunk Web when a
new version is available, and by Splunk to understand aggregate usage
information.
Usage data collected by Splunk apps: Consult the app documentation.
Diagnostic files: Not sent automatically; sent to Support by request. Run the
diag command with appropriate flags and inspect the file it creates before you
upload it to your case. See Generate a diag in the Troubleshooting Manual.
Used by Support to troubleshoot an open case.
Opt in or out of sharing usage data
The first time you run Splunk Web on a search head as an admin or equivalent,
you are presented with a modal window with two selectable check boxes, one
for anonymized usage data and one for Support usage data.
Neither anonymized nor Support usage data is sent unless you click OK with one
or both boxes checked. You can opt in or out at any time by navigating to
Settings > Instrumentation.
To enable or disable collection of usage data, your user role must include the
edit_telemetry_settings capability.
Opt out of sharing all usage data and prevent future admins from enabling
sharing
The opt-in modal controls sharing for anonymized and Support data, but license
usage data is sent by default for new installations starting in Splunk Enterprise
7.0.0.
To opt out from all collection of usage data and prevent other admins from
enabling it in the future, do the following on one search head in each cluster and
on each nonclustered search head:
For license usage data, the anonymized usage data that is not browser session
data, and the Support usage data that is not session data, you can view what
data has been recently sent in Splunk Web.
This log of data is available only after the first run of the collection. To inspect the
type of data that gets sent before you opt in on your production environment, you
can opt in on your sandbox environment.
For the usage data logs to be created and available, your search heads,
indexers, and cluster master must be running Splunk Enterprise version 6.5.0 or
later.
To view the remaining anonymized or Support usage data, the browser session
data, use JavaScript logging in your browser. Look for network events sent to a
URL containing splkmobile. Events are triggered by actions such as navigating
to a new page in Splunk Web.
The tables below describe the data collected if you opt in to both usage data
programs and do not turn off update checker. The usage data is in JSON format
tagged with a field named component.
Starting in Splunk Enterprise 7.0.0, you have the option of sending Support data.
This is the same data as the anonymized usage data, but if you opt to send
Support data, Splunk can use the license GUID to identify usage data from a
specific customer account.
Upon upgrade, you are presented with an opt-in modal advising you of additional
data collection.
In addition, the following pieces of data are included starting with Splunk
Enterprise version 7.0.0:
Topology information:
• license slaves
• indexer cluster members
• indexer cluster search heads
• distributed search peers
• search head cluster members
Index information and app information are also included; see the table below.
Types of data collected by Splunk Enterprise
Support usage data is the same as the anonymized usage data, but the license
GUID is persisted when it reaches Splunk.
Note that additional data might be collected by certain apps. See app
documentation for details.
deployment.clustering.member: Indexer cluster member. Collected by a search
running on the cluster master.
deployment.clustering.searchhead: Indexer cluster search head. Collected by a
search running on the cluster master.
deployment.forwarders: Number of hosts, number of Splunk software instances,
OS/version, CPU architecture, Splunk software version, distribution of
forwarding volume. Collected for forwarders.
deployment.distsearch.peer: Distributed search peers. Collected by a search
running on a search head cluster captain or, in the absence of a search head
cluster, a search head.
deployment.index: Indexes per search peer. Collected by a search running on a
search head cluster captain or, in the absence of a search head cluster, a
search head.
deployment.licensing.slave: License slaves. Collected by a search running on
the license master.
deployment.node: GUID, host, number of cores by type (virtual/physical), CPU
architecture, memory size, storage (partition) capacity, OS/version, Splunk
version. Collected for each indexer or search head.
deployment.node, performance.indexing, performance.search: Core utilization,
storage utilization, memory usage, indexing throughput, search latency.
deployment.shclustering.member: Search head cluster members. Collected by
a search running on the search head captain.
usage.indexing.sourcetype: Indexing volume, number of events, number of
hosts, source type name.
usage.users.active: Number of active users.
usage.search.type, usage.search.concurrent: Number of searches of each type,
distribution of concurrent searches.
deployment.app: Apps installed on search head and search peers. Collected by
a search running on a search head cluster captain or, in the absence of a
search head cluster, a search head.
usage.app.page: App name, page name, locale, number of users, number of
page loads. Session data.
app.session.session_start: deploymentID (identifier for deployment), eventID
(identifier for this specific event), experienceID (identifier for this session),
userID (hashed username), data.guid (GUID for instance serving the page).
Session data. Triggered when user is first authenticated.
app.session.pageview: Page views. Session data. Triggered when user visits a
new page.
app.session.page.interact: Adding data page interaction. Tracks user page
interactions within the adding data context. This includes any data sources
searched for, which collection method is chosen, and what deployment type is
selected.
app.session.page.load: Page loads. Tracks loads and whether web services
are supported. Triggered when a page is loaded.
app.session.dashboard.pageview: Dashboard characteristics. Session data.
Triggered when a dashboard is loaded.
app.session.pivot.load: Pivot characteristics. Session data. Triggered when a
pivot is loaded.
app.session.pivot.interact: Pivot changes. Session data. Triggered when a
change is made to a pivot.
app.session.search.interact: Search page interaction. Session data. Triggered
by interaction with the search page.
The following example shows a complete usage data event in JSON format:
{
"component": "deployment.app",
"data": {
"name": "alert_logevent",
"enabled": true,
"version": "7.0.0",
"host": "ip-10-222-17-130"
},
"visibility": "anonymous,support",
"timestamp": 1502845738,
"date": "2017-08-15",
"transactionID": "01AFCDA0-2857-423A-E60D-483007F38C1A",
"executionID": "2A8037F2793D5C66F61F5EE1F294DC",
"version": "2",
"deploymentID": "9a003584-6711-5fdc-bba7-416de828023b"
}
For ease of use, the following tables show examples of only the "data" field from
the JSON event.
Component: deployment.app
Data category: Apps installed on search head and peers
Example:
{
    "name": "alert_logevent",
    "enabled": true,
    "version": "7.0.0",
    "host": "ip-10-222-17-130"
}
Component: deployment.clustering.indexer
Data category: Clustering configuration
Example:
{
    "host": "docteam-unix-5",
    "summaryReplication": true,
    "siteReplicationFactor": null,
    "enabled": true,
    "multiSite": false,
    "searchFactor": 2,
    "siteSearchFactor": null,
    "timezone": "-0700",
    "replicationFactor": 3
}
Component: deployment.clustering.member
Data category: Indexer cluster member
Example:
{
    "site": "default",
    "master": "ip-10-212-28-184",
    "member": {
        "status": "Up",
        "guid": "471A2F25-CD92-4250-AA17-
        "host": "ip-10-212-28-4"
    }
}
Component: deployment.clustering.searchhead
Data category: Indexer cluster search head
Example:
{
    "site": "default",
    "master": "ip-10-222-27-244",
    "searchhead": {
        "status": "Connected",
        "guid": "1D4D422A-ADDE-437D-BA07-
        "host": "ip-10-212-55-3"
    }
}
Component: deployment.distsearch.peer
Data category: Distributed search peers
Example:
{
    "peer": {
        "status": "Up",
        "guid": "472A5F22-CC92-4220-AA17-
        "host": "ip-10-222-21-4"
    },
    "host": "ip-10-222-27-244"
}
Component: deployment.forwarders
Data category: Forwarder architecture, forwarding volume
Example:
{
    "hosts": 168,
    "instances": 497,
    "architecture": "x86_64",
    "os": "Linux",
    "splunkVersion": "6.5.0",
    "type": "uf",
    "bytes": {
        "min": 389,
        "max": 2291497,
        "total": 189124803,
        "p10": 40960,
        "p20": 139264,
        "p30": 216064,
        "p40": 269312,
        "p50": 318157,
        "p60": 345088,
        "p70": 393216,
        "p80": 489472,
        "p90": 781312
    }
}
Component: deployment.index
Data category: Indexes per search peer
Example:
{
    "name": "_audit",
    "type": "events",
    "total": {
        "rawSizeGB": null,
        "maxTime": 1502845730.0,
        "events": 1,
        "maxDataSizeGB": 488.28,
        "currentDBSizeGB": 0.0,
        "minTime": 1502845719.0,
        "buckets": 0
    },
    "host": "ip-10-222-17-130",
    "buckets": {
        "thawed": {
            "events": 0,
            "sizeGB": 0.0,
            "count": 0
        },
        "warm": {
            "sizeGB": 0.0,
            "count": 0
        },
        "cold": {
            "events": 0,
            "sizeGB": 0.0,
            "count": 0
        },
        "coldCapacityGB": "unlimited",
        "hot": {
            "sizeGB": 0.0,
            "max": 3,
            "count": 0
        },
        "homeEventCount": 0,
        "homeCapacityGB": "unlimited"
    },
    "app": "system"
}
Component: deployment.licensing.slave
Data category: License slaves
Example:
{
    "master": "9d5c20b4f7cc",
    "slave": {
        "pool": "auto_generated_pool_ente
        "guid": "A5FD9178-2E76-4149-9FGF-
        "host": "9d5c20b4f7cc"
    }
}
Component: deployment.node
Data category: Host architecture, utilization
Example:
{
    "guid": "123309CB-ABCD-4BC9-9B6A-18
    "host": "docteam-unix-3",
    "os": "Linux",
    "osExt": "Linux",
    "osVersion": "3.10.0-123.el7.x86_64
    "splunkVersion": "6.5.0",
    "cpu": {
        "coreCount": 2,
        "utilization": {
            "min": 0.01,
            "p10": 0.01,
            "p20": 0.01,
            "p30": 0.01,
            "p40": 0.01,
            "p50": 0.02,
            "p60": 0.02,
            "p70": 0.03,
            "p80": 0.03,
            "p90": 0.05,
            "max": 0.44
        },
        "virtualCoreCount": 2,
        "architecture": "x86_64"
    },
    "memory": {
        "utilization": {
            "min": 0.26,
            "max": 0.34,
            "p10": 0.27,
            "p20": 0.28,
            "p30": 0.28,
            "p40": 0.28,
            "p50": 0.29,
            "p60": 0.29,
            "p70": 0.29,
            "p80": 0.3,
            "p90": 0.31
        },
        "capacity": 3977003401
    },
    "disk": {
        "fileSystem": "xfs",
        "capacity": 124014034944,
        "utilization": 0.12
    }
}
Component: deployment.shclustering.member
Data category: Search head cluster members
Example:
{
    "site": "default",
    "member": {
        "status": "Up",
        "guid": "290C48B1-50D3-48C9-AF86-
        "host": "ip-10-222-19-223"
    },
    "captain": "ip-10-222-19-253"
}
Component: licensing.stack
Data category: Licensing quota and consumption
Example:
{
    "type": "download-trial",
    "guid": "4F735357-F278-4AD2-BBAB-13
    "product": "enterprise",
    "name": "download-trial",
    "licenseIDs": [
        "553A0D4F-3B7B-4AD5-B241-89B943
    ],
    "quota": 524288000,
    "pools": [
        {
            "quota": 524288000,
            "consumption": 304049405
        }
    ],
    "consumption": 304049405,
    "subgroup": "Production",
    "host": "docteam-unix-9"
}
Component: performance.indexing
Data category: Indexing throughput and volume
Example:
{
    "host": "docteam-unix-5",
    "thruput": {
        "min": 412,
        "max": 9225,
        "total": 42980219,
        "p10": 413,
        "p20": 413,
        "p30": 431,
        "p40": 450,
        "p50": 474,
        "p60": 488,
        "p70": 488,
        "p80": 488,
        "p90": 518
    }
}
Component: performance.search
Data category: Search runtime statistics
Example:
{
    "latency": {
        "min": 0.01,
        "max": 1.33,
        "p10": 0.02,
        "p20": 0.02,
        "p30": 0.05,
        "p40": 0.16,
        "p50": 0.17,
        "p60": 0.2,
        "p70": 0.26,
        "p80": 0.34,
        "p90": 0.8
    }
}
Component: app.session.dashboard.pageview
Data category: Dashboard characteristics. Session data, triggered when a dashboard is loaded.
Example:
{
    "dashboard": {
        "autoRun": false,
        "hideEdit": false,
        "numCustomCss": 0,
        "isVisible": true,
        "numCustomJs": 0,
        "hideFilters": false,
        "hideChrome": false,
        "hideAppBar": false,
        "hideFooter": false,
        "submitButton": false,
        "refresh": 0,
        "hideSplunkBar": false,
        "hideTitle": false,
        "isScheduled": false
    },
    "numElements": 1,
    "numSearches": 1,
    "numPanels": 1,
    "elementTypeCounts": {
        "column": 1
    },
    "layoutType": "row-column-layou
    "searchTypeCounts": {
        "inline": 1
    },
    "name": "test_dashboard",
    "numFormInputs": 0,
    "formInputTypeCounts": {},
    "numPrebuiltPanels": 0,
    "app": "search"
}
Component: app.session.pivot.interact
Data category: Changes to pivots. Generated when a change to a pivot is made.
Example:
{
    "eventAction": "change",
    "eventLabel": "Pivot - Report C
    "numColumnSplits": 0,
    "reportProps": {
        "display.visualizations.cha
        "display.visualizations.typ
        "earliest": "0",
        "display.statistics.show":
        "display.visualizations.cha
        "display.visualizations.cha
        "-90",
        "display.visualizations.sho
        "display.general.type": "vi
    },
    "numRowSplits": 1,
    "eventCategory": "PivotEditorRe
    "app": "search",
    "page": "pivot",
    "numAggregations": 1,
    "numCustomFilters": 0,
    "eventValue": {},
    "locale": "en-US",
    "context": "pivot"
}
Component: app.session.pivot.load
Data category: Pivot characteristics. Session data, triggered when a pivot is loaded.
Example:
{
    "eventAction": "load",
    "eventLabel": "Pivot - Page",
    "numColumnSplits": 0,
    "reportProps": {
        "display.visualizations.cha
        "display.visualizations.typ
        "earliest": "0",
        "display.statistics.show":
        "display.visualizations.cha
        "display.visualizations.sho
        "display.general.type": "vi
    },
    "numRowSplits": 1,
    "eventCategory": "PivotEditor",
    "app": "search",
    "page": "pivot",
    "numAggregations": 1,
    "numCustomFilters": 0,
    "locale": "en-US",
    "context": "pivot"
}
"component":"app.session.page.load",
"visibility":"anonymous,support",
Triggered when "timestamp":1530637605818,
app.session.page.load a new page "userID":"890e662510aa0462112a4927b0
loads. "experienceID":"dd7136a3-2584-2e7f-1
"deploymentID":"98dfc5ff-756c-5b01-9
"eventID":"b06d0493-a3b8-3cae-52ee-8
"version":"3"
| "component":"app.session.page.inter
Triggered when "visibility":"anonymous,support",
"timestamp":1530297674543,
a query string
app.session.search.interact "userID":"890e662510aa0462112a4927b0
is run as a "experienceID":"dd7136a3-2584-2e7f-1
search. "deploymentID":"98dfc5ff-756c-5b01-9
"eventID":"bbc6244e-587d-17ee-f8a9-9
"version":"3"
Component: app.session.pageview
Data category: Page views. Session data, triggered when a user visits a new page.
Example:
{
    "app": "launcher",
    "page": "home"
}
Component: app.session.session_start
Data category: Session data, triggered when a user is first authenticated.
Example:
{
    "app": "launcher",
    "splunkVersion": "6.6.0",
    "os": "Ubuntu",
    "browser": "Firefox",
    "browserVersion": "38.0",
    "locale": "en-US",
    "device": "Linux x86_64",
    "osVersion": "not available",
    "page": "home",
    "guid": "2550FC44-64E5-43P5-AS4
}
Component: usage.app.page
Data category: App page users and views
Example:
{
    "app": "search",
    "locale": "en-US",
    "occurrences": 1,
    "page": "datasets",
    "users": 1
}
Component: usage.indexing.sourcetype
Data category: Indexing by source type
Example:
{
    "name": "vendor_sales",
    "bytes": 2026348,
    "events": 30245,
    "hosts": 1
}
Component: usage.search.concurrent
Data category: Search concurrency
Example:
{
    "host": "docteam-unix-5",
    "searches": {
        "min": 1,
        "max": 11,
        "p10": 1,
        "p20": 1,
        "p30": 1,
        "p40": 1,
        "p50": 1,
        "p60": 1,
        "p70": 1,
        "p80": 2,
        "p90": 3
    }
}
Component: usage.search.report_acceleration
Data category: Report acceleration metrics
Example:
{
    "existing_report_accelerations": 2,
    "access_count_of_existing_report_ac
}
Component: usage.search.type
Data category: Searches by type
Example:
{
    "ad-hoc": 1428,
    "scheduled": 225
}
Component: usage.users.active
Data category: Active users
Example:
{
    "active": 23
}
License usage data
Component: licensing.stack
Data category: Licensing quota and consumption
Example:
{
    "type": "download-trial",
    "guid": "4F735357-F278-4AD2-BBAB-139A85A75DBB",
    "product": "enterprise",
    "name": "download-trial",
    "licenseIDs": [
        "553A0D4F-3B7B-4AD5-B241-89B94386A07F"
    ],
    "quota": 524288000,
    "pools": [
        {
            "quota": 524288000,
            "consumption": 304049405
        }
    ],
    "consumption": 304049405,
    "subgroup": "Production",
    "host": "docteam-unix-9"
}
What data is not collected
Certain license programs require that you report your license usage. The easiest
way to do this is to automatically send this information to Splunk.
If you do not enable automatic license data sharing, you can send this data
manually. To send usage data manually:
1. On a search head, log into Splunk Web.
2. Select Settings > Instrumentation.
3. Click Export.
4. Select a date range and data type.
5. Click Send or Export to send data to Splunk or export data to your local
machine.
Feature footprint
Anonymized, Support, and license usage data is summarized and sent once per
day, starting at 3:05 a.m.
Session data and update checker data are sent from your browser as the events
are generated. The performance implications are negligible.
About searches
One primary instance in your deployment runs the distributed searches to collect
most of the usage data. Which instance that is depends on the details of your
deployment:
If you opt out of instrumentation, the searches on this primary instance do not
run.
After the searches run, the data is packaged and sent to Splunk, as well as
indexed to the _telemetry index. The _telemetry index is retained for two years
by default and is limited in size to 256 MB.
If all instances in your deployment are running Splunk Enterprise version 7.1.0 or
later, you can schedule instrumentation to run starting at any hour of the day, on
a daily or a weekly schedule.
The collection process in a deployment begins at the top of the hour, for
example, at 3:00 A.M. The process runs a few searches in sequence on several
instances in your deployment. Depending on the size of your deployment and
whether you run instrumentation daily or weekly, it can take a few minutes before
the final searches run on the primary instance to package and send the data to
Splunk. See Which instance runs the searches.
If you opt in to instrumentation, the collection process begins daily at 3:00 A.M.
by default.
Change the collection schedule using configuration files
You can change the collection schedule by editing the telemetry.conf file. For
guidelines on editing this file, see telemetry.conf.spec.
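For example, a minimal sketch of a local telemetry.conf override, assuming the scheduledDay and scheduledHour settings described in telemetry.conf.spec (the values here are illustrative and mean a daily run starting at 3:00 A.M.):

[general]
scheduledDay = *
scheduledHour = 3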
Two types of update checker data are sent: Splunk Enterprise update checker
data and app update checker data.
For more information about the data that your deployment can send to Splunk,
see Share data in Splunk Enterprise.
Update checker data about Splunk Enterprise is sent to Splunk by your browser
soon after you log into Splunk Web. To view the data that is sent for Splunk
Enterprise, watch JavaScript network traffic as you log into Splunk Web. The
data is sent inside a call to quickdraw.splunk.com.
You can turn off update checker reporting for Splunk Enterprise in web.conf, by
setting the updateCheckerBaseURL attribute to 0. See About configuration files.
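For example, in a local web.conf (the [settings] stanza is where Splunk Web attributes live):

[settings]
updateCheckerBaseURL = 0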
The Splunk Enterprise update checker data includes the following information, with example values:
• CPU architecture: x86_64
• Operating system: Linux
• Product: enterprise
• Splunk roles: admin
• License group, subgroup, and hashed GUID: Enterprise, Production, <GUID>
• Splunk software version: 7.0.0
App update checker data
Update checker data about your Splunk apps is sent to Splunk daily via a REST
call from splunkd to splunkbase.splunk.com. This data is correlated with
information about app downloads to populate the app analytics views on
Splunkbase for an app's developer, and to compute the number of installs on the
app details page.
You can turn off update checker reporting for a Splunk app in app.conf in the app
directory. Set the check_for_updates setting to false.
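As a sketch, assuming the setting lives in the [package] stanza as described in app.conf.spec, the app's local app.conf might look like this:

[package]
check_for_updates = false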
The app update checker data includes the following information, with example values:
• App ID, name, and version: gettingstarted, Getting Started, 1.0
• Splunk version: 7.0
• Platform, architecture: Darwin, x86_64
Configure Splunk licenses
Licenses specify how much external data you can index per day.
For event data, data volume is based on the amount of raw external data that
the indexer ingests into its indexing pipeline, after any filtering. It is not based on
the amount of compressed data that gets written to disk.
For metrics data, each metric event counts as a fixed 150 bytes. Metrics data
does not have a separate license. Ingested metrics data draws from the same
license quota as event data.
When you first install a downloaded copy of Splunk Enterprise, the installed
instance uses a 60 day trial license. This license allows you to try out all of the
features in Splunk Enterprise for 60 days, and to index up to 500 MB of data per
day.
If you want to continue using Splunk Enterprise features after the 60 day trial
expires, you must purchase an Enterprise license. Contact a Splunk sales rep to
learn more. See Types of Splunk licenses for information on Enterprise licenses.
If you do not install an Enterprise license after the 60 day trial expires, you can
switch to Splunk Free. Splunk Free includes a subset of the features of Splunk
Enterprise. It allows you to index up to 500 MB of data a day indefinitely. See
About Splunk Free.
Splunk Free does not include authentication. This means that any user can
access your installation through Splunk Web or the CLI without providing
credentials.
Additionally, Splunk Free does not include scheduled saved searches or alerts,
so any saved searches or alerts that you have previously configured will no
longer run once you switch to Splunk Free.
Splunk Enterprise licenses
There are several types of Splunk Enterprise licenses. They all include access to
the same set of Splunk Enterprise features, including authentication, distributed
search, deployment management, scheduling of alerts, and role-based access
controls.
The standard Splunk Enterprise license is available for purchase and can be
configured for any indexing volume. Contact Splunk Sales for information.
No-enforcement license
Starting with version 6.5, Splunk Enterprise no longer disables search when you
exceed your licensed data ingestion quota. Users can keep searching even if the
license master acquires five license violation warnings in a 30 day window. The
license master is still in violation, but search is no longer blocked.
This no-enforcement behavior is built into all new Enterprise licenses for 6.5 or
later.
Enterprise Trial license
When you download Splunk software for the first time, you are asked to register.
Your registration authorizes you to receive an Enterprise Trial license, which
allows a maximum indexing volume of 500 MB/day. The Enterprise Trial license
expires 60 days after you start using Splunk software. At that point, you must
either purchase an Enterprise license or switch to a Free license with a limited
feature set.
The Trial license is for standalone, single-instance use. For a trial Splunk
Enterprise distributed deployment, consisting of multiple Splunk Enterprise
instances, each instance must use its own Trial license. This differs from a
distributed deployment running under a full Enterprise license, where you install
the Enterprise license on a central license master and then simply point the other
instances to the license master. See Configure a license master.
The standard Enterprise Trial license expires 60 days after you start using
Splunk software and allows a maximum indexing volume of 500 MB/day. If you
are preparing a pilot for a large deployment and have requirements for a trial of
longer duration or higher indexing volume, contact Splunk Sales or your sales
representative directly with your request.
Dev/Test licenses
A Dev/Test license does not stack with an Enterprise license. If you install a
Dev/Test license over an Enterprise license, it replaces the Enterprise license
file.
Free license
The Free license includes 500 MB/day of indexing volume, is free of charge, and
has no expiration date.
A number of features that are available with the Enterprise license are disabled in
Splunk Free, including:
♦ Searches are run against all public indexes.
♦ Search restrictions, such as user quotas, maximum per-search time
ranges, and search filters, are not supported.
♦ The capability system is disabled. All capabilities are enabled for all
users accessing Splunk software.
Splunk for Industrial IoT has its own license which is not stackable with other
licenses. This license gives you access to Splunk Enterprise and an entitlement
for a set of apps. For details of what products are included, see Splunk for
Industrial IoT.
For more information about this license, see Licensing for Splunk for Industrial
IoT.
Consult this comparison of the major Splunk Enterprise license types:
• Blocks search while in violation: Pre-6.5: yes; 6.5+ (no-enforcement): no; Dev/Test: varies; Sales Trial: yes; Free: yes; IoT: no.
• Logs internally and displays a message in Splunk Web when in warning or violation: yes for all license types.
• Stacks with other licenses: Pre-6.5: yes; 6.5+ (no-enforcement): yes; Dev/Test: no; Sales Trial: yes; Free: no; IoT: no.
• Enables full Enterprise feature set: Pre-6.5: yes; 6.5+ (no-enforcement): yes; Dev/Test: no; Sales Trial: yes; Free: no; IoT: yes.
Forwarder license
The Forwarder license allows forwarding of unlimited data. Unlike a Free license,
it enables authentication.
The Forwarder license is available only for instances that simply forward data. It
is not valid for use on instances that also perform additional functions, such as
indexing.
Forwarder licenses are included with Splunk. You do not need to purchase them
separately.
Beta license
Splunk's Beta releases require their own Beta licenses, which are not compatible
with other Splunk releases.
Beta licenses typically enable Enterprise features, but only for the specified Beta
release.
This topic discusses the license requirements for each component type. For
more information on the types of licenses discussed in this topic, see Types of
Splunk software licenses.
License requirements
This table provides a summary of the license needs for the various Splunk
Enterprise component types.
Heavy forwarder: Forwarder license. Heavy forwarders that index data need
access to an Enterprise license instead of a Forwarder license.
Components and licensing issues
Indexers
Search heads
Forwarders
Forwarders ingest data and forward that data to another forwarder or an indexer.
Because data is not metered until it is actually indexed, forwarders do not usually
incur license usage.
Note: A forwarder can use the Free license instead of a Forwarder license, but
some important functionality is unavailable with a Free license. In particular, a
forwarder using a Free license cannot be a deployment client and it cannot make
use of authentication. See About Splunk Free.
Management components
Each indexer cluster node requires an Enterprise license. There are a few
license issues that are specific to indexer clusters:
A search head cluster is a group of search heads that coordinate their activities.
Each search head in a search head cluster is referred to as a member.
The search head cluster deployer, which distributes apps to the members, also
needs access to an Enterprise license.
Besides indexers, other Splunk Enterprise instances must be assigned to a
Splunk Enterprise license pool, so that they can access certain Splunk Enterprise
features, such as distributed search. As a general rule, assign all of your Splunk
Enterprise instances, with the exception of forwarders, to a license pool. See
Licenses and distributed deployments.
Note: Stacks and pools are not available with the Free or Enterprise Trial
licenses.
Stacks
The following license types cannot be stacked with other licenses:
• Enterprise Trial
• Free
• Dev/Test. If you install a Dev/Test license over an Enterprise license, the
Enterprise license will be deleted.
• Forwarder
Groups
A license group contains zero or more stacks. A stack can be a member of only
one group.
Only one group can be active at a time. This means that a given license master
can only administer pools of licenses of one group at a time.
Subgroups
Pools
A license pool consists of licensing volume allocated from a stack. A stack can
contain multiple pools, each with a portion of the stack's total licensing volume.
The license master manages the pools. Each of the master's license slaves
can access only a single pool.
You can manage volume usage by creating multiple pools and assigning
indexers to specific pools. For example, you can assign your production and test
indexers to separate pools. That way, you can ensure that testing activity does
not impinge on production needs.
License master
License slaves
Install a license
This topic describes how to install new Enterprise licenses.
1. In Splunk Web, navigate to Settings > Licensing.
2. Click Add license.
3. Either click Choose file, browse for your license file, and select it, or
click copy & paste the license XML directly... and paste the text of your
license file into the provided field.
4. Click Install.
5. If this is the first Enterprise license that you are installing on the instance,
you must restart Splunk Enterprise.
If you have a single Splunk Enterprise instance, it serves as its own license
manager, once you install an Enterprise license on it. You do not need to further
configure it as a license master.
If you have multiple Splunk Enterprise instances, you usually want to manage
their license access from a central location. To do this, you must configure one
instance as a central license master. You then designate each of the remaining
Splunk Enterprise instances as license slaves of the license master.
Deploy the central license master
The license master does not usually need to run on a dedicated Splunk
Enterprise instance. Instead, you can colocate it on an instance that is also
performing other tasks:
Compatibility between the master and its slaves requires that their versions
follow all of these rules:
For example:
• A 7.1 master is compatible with 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 7.0, and
7.1 slaves.
• A 6.5 master is compatible with 6.0, 6.1, 6.2, 6.3, 6.4, and 6.5 slaves.
Now you can manage your licenses from the license master.
1. On the instance that you want to configure as a license slave, log into Splunk
Web and navigate to Settings > Licensing.
2. Click Change to slave.
3. Switch the radio button from Designate this Splunk instance as the master
license server to Designate a different Splunk instance as the master
license server.
4. Specify the license master to which this license slave should report. You must
provide either an IP address or a hostname and the Splunk management port,
which is 8089 by default.
5. Click Save.
6. Restart Splunk Enterprise.
When you first install an Enterprise license on a Splunk Enterprise instance, the
instance becomes the license master for that license. Several default
configurations result:
You can change the set of pools. You can also configure access of license
slaves to stacks.
The following example shows the Settings > Licensing screen for a newly
installed 100 MB Enterprise license.
Edit an existing license pool
You can edit a license pool to change the pool's allocation or to change the set of
indexers that have access to the pool.
1. Next to the license pool that you want to edit, click Edit. The Edit license pool
page is displayed.
2. (Optional) Change the allocation for the pool. The allocation is how much of
the stack's overall licensing volume is available for use by the indexers that
access this pool. The allocation can be a specific value, or it can be the entire
amount of indexing volume available in the stack, as long as it is not already
allocated to any other pool.
3. (Optional) Change the indexers that have access to the pool. The options are:
• Any indexer configured as a license slave can access the pool and use
the license allocation within it.
• Only specific indexers can access the pool and use the license allocation
within it. To allow a specific indexer to draw from the pool, click the plus
sign next to the name of the indexer in the list of available indexers to
move it into the list of associated indexers.
4. Click Submit.
Before you can create a new license pool from the default Enterprise stack, you
must make some indexing volume available by either editing an existing pool,
such as the auto_generated_pool_enterprise pool, and reducing its allocation,
or deleting the pool entirely. Click Delete next to the pool's name to delete it.
1. Click Add pool toward the bottom of the page. The Create new license pool
page is displayed.
3. Set the allocation for the pool. The allocation is how much of the stack's overall
licensing volume is available for use by the indexers that access this pool. The
allocation can be a specific value, or it can be the entire amount of indexing
volume available in the stack, as long as it is not already allocated to any other
pool.
4. Specify the indexers that have access to the pool. The options are:
• Any indexer configured as a license slave can access the pool and use
the license allocation within it.
• Only specific indexers can access the pool and use the license allocation
within it. To allow a specific indexer to draw from the pool, click the plus
sign next to the name of the indexer in the list of available indexers to
move it into the list of associated indexers.
Manage Splunk licenses
Delete a license
If a license expires, you can delete it. To delete a license:
master.
5. Stop the instance.
6. Delete the old license files under
/opt/splunk/etc/licenses/enterprise/.
7. Start the instance.
8. Confirm that the instance connects as a slave to the new license
master.
For general information on the Splunk CLI, see "About the CLI".
You can use the CLI to add, edit, list, and remove licenses and license-related
objects. The available commands are:
• add: licenser-pools, licenses. Add licenses or license pools to a license stack.
• edit: licenser-localslave, licenser-pools. Edit the attributes of a local license slave node or a license pool.
• list: licenser-groups, licenser-localslave, licenser-messages, licenser-pools, licenser-slaves, licenser-stacks, licenses. List license objects and their properties.
• remove: licenser-pools, licenses. Remove licenses or license pools from a license stack.
License-related objects are:
• licenser-groups: the different license groups you can switch to.
• licenser-localslave: a local indexer's configuration.
• licenser-messages: the alerts or warnings about the state of your licenses.
• licenser-pools: a pool, or virtual license. A stack can be divided into
various pools, with multiple slaves sharing the quota of each pool.
• licenser-slaves: all the slaves that have contacted the master.
• licenser-stacks: a stack of licenses. A stack contains licenses of the same
type, and their quotas are cumulative.
• licenses: all licenses for this Splunk instance.
Common licenser-related tasks
Managing licenses
To add a new license to the license stack, specify the path to the license file:
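For example (the license file path shown is a placeholder for your own):
./splunk add licenses /opt/splunk/etc/licenses/enterprise/enterprise.lic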
The list command also displays the properties of each license, including the
features it enables (features), the license group and stack it belongs to
(group_id, stack_id), the indexing quota it allows (quota), and the license key
that is unique for each license (license_hash).
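To view all licenses, a command along these lines should work:
./splunk list licenses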
If a license expires, you can remove it from the license stack. To remove a
license from the license stack, specify the license's hash:
./splunk remove licenses
BM+S8VetLnQEb1F+5Gwx9rR4M4Y91AkIE=781882C56833F36D
You can create a license pool from one or more licenses in a license stack (if you
have an Enterprise license). Basically, a license stack can be carved up into
multiple licenser pools. Each pool can have more than one license slave sharing
the quota of the pool.
To add a license pool to the stack, you need to: name the pool, specify the stack
that you want to add it to, and specify the indexing volume allocated to that pool:
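For example, a sketch with placeholder pool name, quota, slave GUIDs, and stack name:
./splunk add licenser-pools pool01 -quota 10mb -slaves guid1,guid2 -stack_id enterprise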
You can also specify a description for the pool and the slaves that are members
of the pool (these are optional).
You can edit the license pool's description, indexing quota, and slaves:
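For example, a sketch with placeholder values (the -append_slaves flag is an assumption, inferred from the description that follows):
./splunk edit licenser-pools pool01 -description Test -quota 15mb -slaves guid3,guid4 -append_slaves true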
This basically adds a description for the pool, "Test", changes the quota from
10mb to 15mb, adds slaves guid3 and guid4 to the pool (instead of overwriting or
replace guid1 and guid2).
A license slave is a member of one or more license pools. The license slave's
access to license volume is controlled by its license master.
To list all the license slaves that have contacted the license master:
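For example:
./splunk list licenser-slaves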
To add a license slave, edit the attributes of that local license slave node (specify
the uri of the splunkd license master instance or 'self'):
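For example, a sketch with a placeholder master URI:
./splunk edit licenser-localslave -master_uri 'https://master:8089'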
You can use the list command to view messages (alerts or warnings) about the
state of your licenses.
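For example:
./splunk list licenser-messages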
• Read Types of Splunk software licenses for information about the new
no-enforcement license.
• Read How Splunk Enterprise licensing works for an introduction to Splunk
Enterprise licensing.
Warnings and violations occur when you exceed the maximum daily indexing
volume allowed for your license. Daily indexing volume is measured from
midnight to midnight by the clock on the license master.
If you exceed your licensed daily volume on any one calendar day, you get a
violation warning. If you have 5 or more warnings on an enforced Enterprise
license, or 3 warnings on a Free license, in a rolling 30-day period, you are in
violation of your license. Unless you are using a Splunk Enterprise 6.5.0 or later
no-enforcement license, search is disabled for the offending pool(s). Other pools
remain searchable, as long as the total license usage from all pools is less than
the total license quota for the license master.
Search capabilities return when you have fewer than 5 (Enterprise) or 3 (Free)
warnings in the previous 30 days, or when you apply a temporary reset license
(available for Enterprise only). To obtain a reset license, contact your sales
representative. See Install a license.
Note: Summary indexing volume does not count against your license, although
in the event of a license violation, summary indexing halts like any other
noninternal search behavior. Internal indexes (for example, _internal and
_introspection) do not count against your license volume.
If you get a license warning, you have until midnight (going by the time on the
license master) to resolve it before it counts against the total number of warnings
within the rolling 30 day period.
If indexers in a pool exceed the license volume allocated to that pool, you will see
a message in Messages on any page in Splunk Web.
Clicking the link in the message takes you to Settings > Licensing, where the
warning displays under the Alerts section of the page. Click a warning to get
more information about it.
About the connection between the license master and license slaves
When you configure a license master instance and add license slaves to it, the
license slaves communicate their usage to the license master every minute. If the
license master is down or unreachable for any reason, the license slave starts a
72 hour timer. If the license slave cannot reach the license master for 72 hours,
search is blocked on the license slave (although indexing continues). Users
cannot search data in the indexes on the license slave until that slave can reach
the license master again.
To find out if a license slave has been unable to reach the license master, look
for an event that contains failed to transfer rows in splunkd.log or search for
it in the _internal index.
To avoid license violations, monitor your license usage and ensure you have
sufficient license volume to support it. If you do not have sufficient license
volume, you need to either increase your license or decrease your indexing
volume.
The distributed management console contains alerts that you can enable,
including one that monitors license usage. See Platform alerts in Monitoring
Splunk Enterprise.
Use the License Usage report to see details about and troubleshoot index
volume in your deployment. Read about the license usage report view in the next
chapter.
If Splunk software tells you to correct your license warning before midnight, your
quota is probably already exceeded for the day. This is called a "soft warning."
The daily license quota resets at midnight (at which point the soft warning
becomes a "hard warning"). You have until then to fix your situation and ensure
that you will not go over quota tomorrow, too.
Once data is already indexed, there is no way to un-index data to give you
"wiggle room" back on your license. You need to get additional license room in
one of these ways:
If you cannot do any of these, prevent a warning tomorrow by using less of your
license. Use the License Usage Report View to learn which data sources are
contributing the most to your quota.
Once you identify a data culprit, decide whether you need all the data it is
emitting. If not, read Route and filter data in the Forwarding Data manual.
Unlike event data, metrics data counts against a license at a fixed 150 bytes per
metric event. Metrics data does not have a separate license. Ingesting metrics
data draws from the same license quota as event data.
License usage report view
The License Usage Report View (LURV) displays detailed license usage information for your license pool. The
dashboard is logically divided into two parts. One part displays information about
today's license usage and any warning information in the current rolling window.
The other part shows historic license usage during the past 30 days.
For every panel in LURV, you can click the search icon at the bottom left of the
panel to interact with the search.
To access LURV:
Today tab
When you first arrive at LURV, you'll see five panels under the Today tab. These
panels show the status of license usage and the warnings for the day that has
not yet finished. The licenser's day ends at midnight in whichever time zone the
license master is set to.
All the panels in the Today tab query the Splunk REST API.
Today's license usage panel
This panel gauges license usage for today, as well as the total daily license
quota across all pools.
This panel shows the license usage for each pool as well as the daily license
quota for each pool.
This panel shows what percentage of the daily license quota has been indexed
by each pool. The percentage is displayed on a logarithmic scale.
This panel shows the warnings, both soft and hard, that each pool has received
in the past 30 days (or since the last license reset key was applied). Read "About
license violations" in this manual to learn more about soft and hard warnings, and
license violations.
For each license slave, this panel shows: the number of warnings, pool
membership, and whether the slave is in violation.
Clicking on the "Previous 30 Days" tab reveals five more panels and several
drop-down options.
All visualizations in these panels limit the number of host, source, source type,
index, and pool values (any field you split by) that are plotted. If you have more
than 10 distinct values for any of these fields, the values after the 10th are
labeled "Other." We've set the maximum number of values plotted to 10 using
timechart. We hope this gives you enough information most of the time without
making the visualizations difficult to read.
Split by: no split, indexer, pool
Split by: source, source type, host, index
There are two things you should understand about these four split-by fields:
report acceleration and squashing.
Acceleration for this report is disabled by default. To accelerate the report, click
the link that shows up in the info message when you select one of these split-by
values. You can also find the workflow for accelerating in Settings > Searches
and reports > License usage data cube. See Accelerate reports in the
Reporting Manual.
Note that report acceleration can take up to 10 minutes to start after you select it
for the first time. Then Splunk software takes some amount time to build the
acceleration summary -- typically a few to tens of minutes, depending on the
amount of data being summarized. Only after the acceleration is finished building
will performance improve for these split-by options.
After the first acceleration run, subsequent reports build on what's already there,
keeping the report up-to-date (and the reporting fast). You should have a long
wait only the first time you turn on report acceleration.
Squashing
Every indexer periodically reports stats of the indexed data to the license
manager, broken down by source, source type, host, and index. If the number of
distinct (source, source type, host, index) tuples grows beyond the
squash_threshold, Splunk squashes the {host, source} values and reports only a
breakdown by {sourcetype, index}. This prevents high memory usage and an
unwieldy number of license_usage.log lines.
Because of squashing on the other fields, only the source type and index split-by
options guarantee full reporting (every byte). Splitting by source or host does not
necessarily guarantee full reporting if those two fields have many distinct values.
Splunk reports the entire quantity indexed, but not the names. So you lose
granularity (that is, you don't know who consumed that amount), but you still
know what the total amount consumed is.
LURV tells you, with a warning message in Splunk Web, when squashing has
occurred.
The "Top 5" panel shows both average and maximum daily usage of the top five
values for whatever split by field you've picked from the Split By menu.
Note that this selects the top five average (not peak) values. So, for example, say
you have more than five source types. Source type F is normally much smaller
than the others but has a brief peak. Source type F's max daily usage is very
high, but its average usage might still be low (since it has all those days of very
low usage to bring down its average). Since this panel selects the top five
average values, source type F might still not show up in this view.
Unlike event data, metrics data counts against a license at a fixed 150 bytes per
metric event. Metrics data does not have a separate license. Ingesting metrics
data draws from the same license quota as event data.
You can identify metrics data by clicking the Previous 30 days tab and sorting
by index.
Use LURV
Read the next topic for a tip about configuring an alert based on a LURV panel.
Set up an alert
You can turn any of the LURV panels into an alert. For example, say you want to
set up an alert for when license usage reaches 80% of the quota.
Splunk Enterprise comes with several preconfigured alerts that you can enable.
See Enable and configure platform alerts in Monitoring Splunk Enterprise.
Troubleshoot LURV
A lack of results in the panels of the Last 30 days view of the License Usage
Report View indicates that the license master on which you are viewing this page
cannot find events from its own
$SPLUNK_HOME/var/log/splunk/license_usage.log file.
• The license master forwards events to indexers that are not configured as
its search peers. Fix this by adding all indexers to whom the license
master is forwarding events as search peers.
• The license master is not reading (and therefore, indexing) events from its
own $SPLUNK_HOME/var/log/splunk directory. This can happen if the
[monitor://$SPLUNK_HOME/var/log/splunk] default data input is disabled
for some reason.
You might also have a gap in your data if your license master is down at
midnight.
An instance that has both a single-source type license and an Enterprise license
does not always show accurate information.
Administer the app key value store
Here are some ways that Splunk apps might use the KV Store:
For information on using the KV store, see app key value store documentation for
Splunk app developers.
The KV store stores your data as key-value pairs in collections. Here are the
main concepts:
• Collections are the containers for your data, similar to a database table.
Collections exist within the context of a given app.
• Records contain each entry of your data, similar to a row in a database
table.
• Fields correspond to key names, similar to the columns in a database
table. Fields contain the values of your data as a JSON file. Although it is
not required, you can enforce data types (number, boolean, time, and
string) for field values.
• _key is a reserved field that contains the unique ID for each record. If you
don't explicitly specify the _key value, the app auto-generates one.
• _user is a reserved field that contains the user ID for each record. This
field cannot be overridden.
• Accelerations improve search performance by making searches that
contain accelerated fields return faster. Accelerations store a small portion
of the collection's data set in an easy-to-traverse form.
In a search head cluster, if any node receives a write, the KV store delegates the
write to the KV store captain. The KV store keeps the reads local, however.
System requirements
KV store uses port 8191 by default. You can change the port number in
server.conf's [kvstore] stanza. For information about other ports that Splunk
Enterprise uses, see "System requirements and other deployment considerations
for search head clusters" in the Distributed Search Manual.
For information about other configurations that you can change in KV store, see
the "KV store configuration" section in server.conf.spec.
To use FIPS with KV store, see the "KV store configuration" section in
server.conf.spec.
If you enable FIPS but do not provide the required settings (caCertFile,
sslKeysPath, and sslKeysPassword), KV store does not run. Look for error
messages in splunkd.log and on the console that executes splunk start.
Use the KV store
1. Create a collection and optionally define a list of fields with data types
using configuration files or the REST API.
2. Perform create-read-update-delete (CRUD) operations using search
lookup commands and the Splunk REST API.
3. Manage collections using the REST API.
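As an illustrative sketch, you might define a collection and optional field types in an app's collections.conf; the collection and field names here are hypothetical:

[mycollection]
field.title = string
field.count = number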
You can monitor your KV store performance through two views in the monitoring
console. One view provides insight across your entire deployment. The other
provides detailed information about KV store operations on each search head.
See KV store dashboards in Monitoring Splunk Enterprise.
Before downgrading Splunk Enterprise to version 7.1 or earlier, you must use the
REST API to resynchronize the KV store.
You can check the status of the KV store using the command line.
Resync stale KV store members
If more than half of the members are stale, you can either recreate the cluster or
resync it from one of the members. See Back up KV store for details about
restoring from backup.
To resync the cluster from one of the members, use the following procedure. This
procedure triggers the recreation of the KV store cluster, in which all of the
members of the existing KV store cluster resynchronize all data from the
current member (or from the member specified in -source sourceId). The
command to resync the KV store cluster can be invoked only from the node that
is operating as search head cluster captain.
1. Determine which node is currently the search head cluster captain. Use
the CLI command splunk show shcluster-status.
2. Log into the shell on the search head cluster captain node.
3. Run the command splunk resync kvstore [-source sourceId]. The
source is an optional parameter, if you want to use a member other than
the search head cluster captain as the source. SourceId refers to the
GUID of the search head member that you want to use.
4. Enter your admin login credentials.
5. Wait for a confirmation message on the command line.
6. Use the splunk show kvstore-status command to verify that the cluster
is resynced.
If fewer than half of the members are stale, resync each member individually.
1. Stop the search head that has the stale KV store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head. This triggers the initial synchronization from
other KV store members.
4. Run the command splunk show kvstore-status to verify synchronization.
If you find yourself resyncing KV store frequently because KV store members are
transitioning to stale mode frequently (daily or maybe even hourly), this means
that apps or users are writing a lot of data to the KV store and the operations log
is too small. Increasing the size of the operations log (or oplog) might help.
When data is written to the KV store, the captain records the write in
the operations log. The members replicate the newly inserted data from there.
When the operations log reaches its allocation (1 GB by default), it overwrites the
beginning of the oplog. Consider a lookup that is close to the size of the
allocation. The KV store rolls the data (and overwrites starting from the beginning
of the oplog) only after the majority of the members have accessed it, for
example, three out of five members in a KV store cluster. But once that happens,
it rolls, so a minority member (one of the two remaining members in this
example) cannot access the beginning of the oplog. Then that minority member
becomes stale and needs to be resynced, which means reading from the entire
collection (which is likely much larger than the operations log).
To decide whether to increase the operations log size, visit the Monitoring
Console KV store: Instance dashboard or use the command line as follows:
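For example, the status command described later in this topic reports oplogStartTimestamp and oplogEndTimestamp, which bound the window of data the oplog currently holds:
./splunk show kvstore-status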
While keeping your operations log too small has obvious negative effects (like
members becoming stale), setting an oplog size much larger than your needs
might not be ideal either. The KV store takes the full log size that you allocate
right away, regardless of how much data is actually being written to the log.
Reading the oplog can take a fair bit of RAM, too, although it is loosely bound.
Work with Splunk Support to determine an appropriate operations log size for
your KV store use. The operations log is 1 GB by default.
Downgrading Splunk Enterprise
If you use this command and then restart Splunk before downgrading, run
the command again before downgrading.
Use the splunk backup kvstore command from the search head. On a search
head cluster, back up from the node with the most recent data. This command
creates an archive file in the $SPLUNK_HOME/var/lib/splunk/kvstorebackup
directory.
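For example, a sketch with a placeholder archive name (the -archiveName flag is an assumption based on the command's spec):
./splunk backup kvstore -archiveName mybackup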
./splunk show kvstore-status
Prerequisites
Use the restore kvstore command to restore the KV store. To restore the KV
store data to the same search head cluster from which it was backed up, use the
following command on each member of the cluster. To restore the KV store data
to a new member being added to the search head cluster, use the following
command to restore the KV store data after you add the member to the cluster.
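For example, a sketch with a placeholder archive name matching the backup:
./splunk restore kvstore -archiveName mybackup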
Use the following procedure to create a new search head cluster with new
Splunk Enterprise instances.
1. Back up the KV store data from a search head in the current search head
cluster.
2. On a search head that will be in the new search head cluster environment,
create the KV store collection using the same collection name as the KV
store data you are restoring.
3. Initialize the search head cluster with replication_factor=1
4. Restore the KV store data to the new search head.
5. Run the following command from the CLI:
splunk clean kvstore --cluster
6. Start the Splunk instance and bootstrap with the new search head.
7. After the KV store has been restored onto the new search head, add the
other new search head cluster members.
8. After the restore completes, change the replication_factor on each
search head to the desired replication factor number.
9. Perform a rolling restart of your deployment.
You can check the status of the KV store in the following ways:
On the command line from any KV store member, in $SPLUNK_HOME/bin type the
following command:
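./splunk show kvstore-status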
See About the CLI for information about using the CLI in Splunk software.
curl -k -u user:pass
https://<host>:<mPort>/services/kvstore/status
See Basic Concepts in the REST API User Manual for more information about
the REST API.
The following is a list of possible values for status and replicationStatus and
their definitions. For more information about abnormal statuses for your KV store
members, check mongod.log and splunkd.log for errors and warnings.
The KV store status values include:
• Down: Member is stopped.
• Removed: Member is removed from the KV store cluster, or is in the
process of being removed.
• Rollback / Recovering / Unknown status: Member might have a problem.
Check mongod.log and splunkd.log on this member.
For statuses that indicate a connection problem, also check splunkd.log on this
member, and verify connection to this member and connection speed.
Sample command-line response:
This member:
date : Tue Jul 21 16:42:24 2016
dateSec : 1466541744.143000
disabled : 0
guid :
6244DF36-D883-4D59-AHD3-5276FCB4BL91
oplogEndTimestamp : Tue Jul 21 16:41:12 2016
oplogEndTimestampSec : 1466541672.000000
oplogStartTimestamp : Tue Jul 21 16:34:55 2016
oplogStartTimestampSec : 1466541295.000000
port : 8191
replicaSet : splunkrs
replicationStatus : KV store captain
standalone : 0
status : ready
KV store members:
10.140.137.128:8191
configVersion : 1
electionDate : Tue Jul 21 16:42:02 2016
electionDateSec : 1466541722.000000
hostAndPort : 10.140.134.161:8191
optimeDate : Tue Jul 21 16:41:12 2016
optimeDateSec : 1466541672.000000
replicationStatus : KV store captain
uptime : 108
10.140.137.119:8191
configVersion : 1
hostAndPort : 10.140.134.159:8191
lastHeartbeat : Tue Jul 21 16:42:22 2016
lastHeartbeatRecv : Tue Jul 21 16:42:22 2016
lastHeartbeatRecvSec : 1466541742.490000
lastHeartbeatSec : 1466541742.937000
optimeDate : Tue Jul 21 16:41:12 2016
optimeDateSec : 1466541672.000000
pingMs : 0
replicationStatus : Non-captain KV store member
uptime : 107
10.140.136.112:8191
configVersion : -1
hostAndPort : 10.140.133.82:8191
lastHeartbeat : Tue Jul 21 16:42:22 2016
lastHeartbeatRecv : Tue Jul 21 16:42:00 2016
lastHeartbeatRecvSec : 1466541720.503000
lastHeartbeatSec : 1466541742.959000
optimeDate : ZERO_TIME
optimeDateSec : 0.000000
pingMs : 0
replicationStatus : Down
uptime : 0
KV store messages
The KV store logs error and warning messages in internal logs, including
splunkd.log and mongod.log. These error messages post to the bulletin board in
Splunk Web. See What Splunk software logs about itself for an overview of
internal log files.
If you experience migration issues with using the KV store, then the following
lines appear in the mongod.log file:
2018-07-17T15:44:12.122-0700 F STORAGE [initandlisten] BadValue:
Invalid value for version, found 3.2, expected '3.6' or '3.4'. Contents
of featureCompatibilityVersion document in admin.system.version: { _id:
"featureCompatibilityVersion", version: "3.2" }. See
http://dochub.mongodb.org/core/3.6-feature-compatibility.
If you see these lines, migrate the KV store manually by using the splunk
migrate migrate-kvstore command.
If you downgrade to Splunk Enterprise version 7.1 from version 7.2, you might
receive the following error in mongod.log:
Before downgrading from Splunk Enterprise version 7.2 to 7.1, resync the KV
store with the following command:
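./splunk resync kvstore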
If you use this command and then restart Splunk before downgrading, run
the command again before downgrading.
Updating the IP address of a KV store server can require a resync
If you update the IP address of a KV store server, you might receive the following
error in mongod.log:
To reconfigure the cluster to pick up the new IP address, resync to force the
cluster configuration to refresh:
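A minimal sketch, assuming the resync command described in Resync stale KV store members:
./splunk resync kvstore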
A manual resync with this command overwrites any local changes on that KV
store server. For more information about manually resyncing a cluster member,
see Why a recovering member might need to resync manually in the Distributed
Search manual.
For more information about resyncing the KV store, see Resync the KV store.
You can monitor your KV store performance through two views in the monitoring
console. The KV store: Deployment dashboard provides information aggregated
across all KV stores in your Splunk Enterprise deployment. The KV store:
Instance dashboard shows performance information about a single Splunk
Enterprise instance running the KV store. See KV store dashboards in Monitoring
Splunk Enterprise.
Meet Splunk apps
• Apps generally offer extensive user interfaces that enable you to work
with your data, and they often make use of one or more add-ons to ingest
different types of data.
• Add-ons generally enable the Splunk platform or a Splunk app to ingest
or map a particular type of data.
To an admin user, the difference matters very little as both apps and add-ons
function as tools to help you get data into the Splunk platform and efficiently use
it.
App
An app is an application that runs on the Splunk platform. By default, the Splunk
platform includes one basic app that enables you to work with your data: the
Search and Reporting app. To address additional use cases, you can install
other apps on your instance of Splunk Enterprise. Some apps are free and others
are paid. Examples include Splunk App for Microsoft Exchange, Splunk
Enterprise Security, and Splunk DB Connect. An app might make use of one or
more add-ons to facilitate how it collects or maps particular types of data.
Add-on
App and add-on support
Anyone can develop an app or add-on for Splunk software. Splunk and members
of our community create apps and add-ons and share them with other users of
Splunk software via Splunkbase, the online app marketplace. Splunk does not
support all apps and add-ons on Splunkbase. Labels in Splunkbase indicate who
supports each app or add-on.
• The Splunk Support team accepts cases and responds to issues only for
the apps and add-ons which display a Splunk Supported label on
Splunkbase.
• Some developers support their own apps and add-ons. These apps and
add-ons display a Developer Supported label on Splunkbase.
• The Splunk developer community supports apps and add-ons which
display a Community Supported label on Splunkbase.
By default, Splunk provides the Search and Reporting app. This interface
provides the core functionality of Splunk and is designed for general-purpose
use. This app displays at the top of your Home Page when you first log in and
provides a search field so that you can immediately start using it.
Once in the Search and Reporting app (by running a search or clicking on the
app in the Home page) you can use the menu bar options to select the following:
• Search: Search your indexes. See the "Using Splunk Search" in the
Search Tutorial for more information.
• Pivot: Use data models quickly design and generate tables, charts, and
visualizations for your data. See the Pivot Manual for more information.
• Reports: Turn your searches into reports. "Saving and sharing reports" in
the Search Tutorial for more information.
• Alerts: Set up alerts for your Splunk searches and reports. See the
Alerting Manual for more information
• Dashboards: Leverage predefined dashboards or create your own. See
Dashboards and Visualizations manual.
You can set a default app for all users with a specific role. For example, you
could send all users with the "user" role to an app you created, and all admin
users to the Monitoring Console.
You can specify a default app for all users to land in when they log in. For
example, to set the Search app as the global default:
1. Create or edit
$SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf (*nix) or
%SPLUNK_HOME%\etc\apps\user-prefs\local\user-prefs.conf (Windows).
2. Specify
[general_default]
default_namespace = search
3. Restart Splunk Enterprise for the change to take effect.
See user-prefs.conf.spec.
In most cases, you should set default apps by role. But if your use case requires
you to set a default app for a specific user, you can do this through Splunk Web
by editing that user's settings.
A user sees an error after logging in if:
• The user does not have permission to access their default app, or
• The default app does not exist (for example, if it is typed incorrectly in
user-prefs.conf).
See Manage app and add-on configurations and properties for information about
managing permissions on an app.
Splunk Enterprise home page.
If your Splunk Enterprise server or your client machine are connected to the
Internet, you can navigate to the app browser from the home page.
• You can click the + sign below your last installed app to go directly to the
app browser.
• You can also click the gear next to Apps to go to the apps manager page.
Click Browse more apps to go to the app browser.
Important: If Splunk Web is located behind a proxy server, you might have
trouble accessing Splunkbase. To solve this problem, you need to set the
HTTP_PROXY environment variable, as described in Use Splunk Web with a
reverse proxy configuration.
If your Splunk Enterprise server and client do not have Internet connectivity, you
must download apps from Splunkbase and copy them over to your server:
1. From a computer connected to the Internet, browse Splunkbase for the app or
add-on you want.
2. Download the app or add-on.
3. Copy the downloaded file to your Splunk Enterprise server.
4. Place it in your $SPLUNK_HOME/etc/apps directory.
5. Untar and ungzip your app or add-on, using a tool like tar -xvf (on *nix) or
WinZip (on Windows). Note that Splunk apps and add-ons are packaged with a
.SPL extension although they are just tarred and gzipped. You may need to force
your tool to recognize this extension.
6. You may need to restart Splunk Enterprise, depending on the contents of the
app or add-on.
7. Your app or add-on is now installed and will be available from Splunk Home (if
it has a web UI component).
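For example, on a *nix host, steps 5 and 6 might look like the following
sketch, where myapp.spl is a placeholder for your downloaded package.
Renaming the package to .tar.gz first helps tools that do not recognize the
.SPL extension:
cp myapp.spl myapp.tar.gz
tar -xzf myapp.tar.gz -C $SPLUNK_HOME/etc/apps
$SPLUNK_HOME/bin/splunk restart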
For more detailed app and add-on deployment information, see your specific
Splunk app documentation, or see Where to install Splunk add-ons in the Splunk
Add-ons manual.
Prerequisites
You must have an existing Splunk platform deployment on which to install Splunk
apps and add-ons.
Deployment methods
There are several ways to deploy apps and add-ons to the Splunk platform. The
correct deployment method to use depends on the following characteristics of
your specific Splunk software deployment:
From your home page in Splunk Web, find the data onboarding guides by
clicking Add Data. You can either search for a data source or explore different
categories of data sources. After you select your data source, you select a
deployment scenario. From there you can view diagrams and high-level steps to
set up and to configure your data source.
Splunk Web links to documentation that explains how to set up and configure
your data source in greater detail. You can find all the Guided Data Onboarding
manuals by clicking the Add data tab on the Splunk Enterprise Documentation
site.
Deployment architectures
Single-instance deployment
Some apps currently do not support installation through Splunk Web. Make sure
to check the installation instructions for your specific app prior to installation.
Distributed deployment
You can deploy apps in a distributed environment using the following methods:
• Deployment server: Use the deployment server to distribute apps and
configuration updates to search heads, indexers, and forwarders. See About
deployment server and forwarder management in Updating Splunk Enterprise
Instances.
• Chef
• Puppet
• Salt
• Windows configuration tools
For the most part, you must install Splunk apps on search heads, indexers, and
forwarders. To determine the Splunk Enterprise components on which you must
install the app, see the installation instructions for the specific app.
You deploy apps to both indexer cluster peers and search head cluster
members using configuration bundles, but each cluster type has its own
distribution method.
To deploy apps to a search head cluster, you must use the deployer. The
deployer is a Splunk Enterprise instance that distributes apps and configuration
updates to search head cluster members. The deployer cannot be a search head
cluster member and must exist outside the search head cluster. See Use the
deployer to distribute apps and configuration updates in the Distributed Search
manual.
Caution: Do not deploy a configuration bundle to a search head cluster from
any instance other than the deployer. If you run the apply shcluster-bundle
command on a non-deployer instance, such as a cluster member, the command
deletes all existing apps and user-generated content on all search head
cluster members!
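For example, a push from the deployer might look like this sketch; the
target URI and credentials are placeholders:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme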
Indexer clusters
To deploy apps to peer nodes (indexers) in an indexer cluster, you must first
place the apps in the proper location on the indexer cluster master, then use the
configuration bundle method to distribute the apps to peer nodes. You can apply
the configuration bundle to peer nodes using Splunk Web or the CLI. For more
information, see Update common peer configurations and apps in Managing
Indexers and Clusters of Indexers.
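For example, after staging the apps in $SPLUNK_HOME/etc/master-apps on the
master, you might push the bundle from the CLI with a command like this
sketch (credentials are placeholders):
splunk apply cluster-bundle --answer-yes -auth admin:changeme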
While you cannot use the deployment server to deploy apps to peer nodes, you
can use it to distribute apps to the indexer cluster master. For more information,
see Use deployment server to distribute apps to the master in Managing
Indexers and Clusters of Indexers.
If you want to deploy an app or add-on to Splunk Cloud, see Install apps in your
Splunk Cloud deployment.
You can install and enable a limited selection of add-ons to configure new data
inputs on your instance of Splunk Light. See Configure an add-on to add data in
the Getting Started Manual for Splunk Light.
Note: Occasionally you may save objects to add-ons as well, though this is
not common. Apps and add-ons are both stored in the apps directory. In the
rare instance that you need to save objects to an add-on, you manage the
add-on the same way as described for apps in this topic.
Any user logged into Splunk Web can create and save knowledge objects to the
user's directory under the app the user is "in" (assuming sufficient permissions).
This is the default behavior -- whenever a user saves an object, it goes into the
user's directory in the currently running app. The user directory is located at
$SPLUNK_HOME/etc/users/<user_name>/<app_name>/local. Once the user has
saved the object in that app, it is available only to that user when they are in that
app unless they do one of the following:
• Promote the object so that it is available to all users who have access
• Restrict the object to specific roles or users (still within the app context)
• Mark the object as globally available to all apps, add-ons and users
(unless you've explicitly restricted it by role/user)
Note: Users must have write permissions for an app or add-on before they can
promote objects to that level.
Users can share their Splunk knowledge objects with other users through the
Permissions dialog. This means users who have read permissions in an app or
add-on can see the shared objects and use them. For example, if a user shares
a saved search, other users can see that saved search, but only within the app in
which the search was created. So if you create a saved search in the app
"Fflanda" and share it, other users of Fflanda can see your saved search if they
have read permission for Fflanda.
Users with write permission can promote their objects to the app level. This
means the objects are copied from their user directory to the app's directory --
from:
$SPLUNK_HOME/etc/users/<user_name>/<app_name>/local/
to:
$SPLUNK_HOME/etc/apps/<app_name>/local/
Users can do this only if they have write permission in the app.
Finally, upon promotion, users can decide if they want their object to be available
globally, meaning all apps are able to see it. Again, the user must have
permission to write to the original app. It's easiest to do this in Splunk Web, but
you can also do it later by moving the relevant object into the desired directory.
To make globally available an object "A" (defined in "B.conf") that belongs to user
"C" in app "D":
1. Move the stanza defining the object A from
$SPLUNK_HOME/etc/users/C/D/B.conf into
$SPLUNK_HOME/etc/apps/D/local/B.conf.
2. Add a setting, export = system, to the object A's stanza in the app's
local.meta file. If the stanza for that object doesn't already exist, you can just
add one.
For example, to promote an event type called "rhallen" created by a user
named "fflanda" in the *Nix app so that it is globally available:
1. Move the [eventtypes/rhallen] stanza from
$SPLUNK_HOME/etc/users/fflanda/unix/local/eventtypes.conf to
$SPLUNK_HOME/etc/apps/unix/local/eventtypes.conf.
2. Add the following stanza:
[eventtypes/rhallen]
export = system
to $SPLUNK_HOME/etc/apps/unix/metadata/local.meta.
Note: Adding the export = system setting to local.meta isn't necessary when
you're sharing event types from the Search app, because it exports all of its
events globally by default.
The knowledge objects discussed here are limited to those that are subject to
access control. These objects are also known as app-level objects and can be
viewed by selecting Apps > Manage Apps from the User menu bar. This page is
available to all users to manage any objects they have created and shared.
These objects include saved searches and reports, event types, views and
dashboards, and field extractions.
There are also system-level objects available only to users with admin privileges
(or read/write permissions on the specific objects). These objects include:
• Users
• Roles
• Auth
• Distributed search
• Inputs
• Outputs
• Deployment
• License
• Server settings (for example: host name, port, etc)
Important: If you add an input, Splunk adds that input to the copy of inputs.conf
that belongs to the app you're currently in. This means that if you navigated to
your app directly from Search, your input will be added to
$SPLUNK_HOME/etc/apps/search/local/inputs.conf, which might not be the
behavior you desire.
When you add knowledge to Splunk, it's added in the context of the app you're in
when you add it. When Splunk is evaluating configurations and knowledge, it
evaluates them in a specific order of precedence, so that you can control what
knowledge definitions and configurations are used in what context. Refer to
About configuration files for more information about Splunk configuration files
and the order of precedence.
• For an overview of apps and add-ons, refer to What are apps and
add-ons? in this manual.
• For more information about app and add-on permissions, refer to App
architecture and object ownership in this manual.
• To learn more about how to create your own apps and add-ons, refer to
Developing Views and Apps for Splunk Web.
You can use Splunk Web to view the objects in your Splunk platform deployment
in the following ways:
• To see all the objects for all the apps and add-ons on your system at
once: Settings > All configurations.
• To see all the saved searches and report objects: Settings > Searches
and reports.
• To see all the event types: Settings > Event types.
• To see all the field extractions: Settings > Fields.
You can:
• View and manipulate the objects on any page with the sorting arrows
• Filter the view to see only the objects from a given app or add-on, owned
by a particular user, or those that contain a certain string, with the App
context bar.
Use the Search field on the App context bar to search for strings in fields. By
default, the Splunk platform searches for the string in all available fields. To
search within a particular field, specify that field. Wildcards are supported.
Note: For information about the individual search commands on the Search
command page, refer to the Search Reference Manual.
Manage apps and add-ons on standalone instances
When you install an app or add-on package over an existing installation,
Splunk updates the app or add-on based on the information found in the
installation package.
Note: If you are running Splunk Free, you do not have to provide a username
and password.
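For example, installing or updating an app package from the CLI follows this
general form; the package path is a placeholder:
./splunk install app /tmp/myapp.spl -update 1 -auth <username>:<password>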
To remove an app or add-on from a standalone instance:
1. (Optional) Remove the app or add-on's indexed data. Typically, the Splunk
platform does not access indexed data from a deleted app or add-on.
However, you can use the Splunk CLI clean command to remove indexed
data from an app before deleting the app. See Remove data from indexes
with the CLI command.
2. Delete the app and its directory. The app and its directory are typically
located in $SPLUNK_HOME/etc/apps/<appname>. You can run the following
command in the CLI:
./splunk remove app [appname] -auth <username>:<password>
3. You may need to remove user-specific directories created for your app or
add-on by deleting any files found here:
$SPLUNK_HOME/etc/users/*/<appname>
4. Restart the Splunk platform.
Managing app and add-on configurations and
properties
You can manage the configurations and properties for apps installed in your
Splunk Enterprise instance from the Apps menu. Click Apps in the user bar
to select one of your installed apps or to manage an app. From the Manage
Apps page, you can edit an app's properties and permissions, enable or
disable apps, and check for updates.
The edits you make to configuration and properties depend on whether you are
the owner of the app or a user.
Select Apps > Manage Apps then click Edit properties for the app or add-on
you want to edit. You can make the following edits for apps installed in this
Splunk Enterprise instance.
• Name: Change the display name of the app or add-on in Splunk Web.
• Visible: Apps with views should be visible. Add-ons, which often do not
have a view, should disable the visible property.
• Upload asset: Use this field to select a local asset file, such as an
HTML, JavaScript, or CSS file, that the app or add-on can access. You can
upload only one file at a time from this panel.
Refer to Develop Splunk apps on the Splunk Developer Portal for details on the
configuration and properties of apps and add-ons.
You can configure Splunk Enterprise to check Splunkbase for updates to an app
or add-on. By default, checking for updates is enabled. You can disable
checking for updates for an app by editing this property from Settings >
Apps > Edit properties.
If this property is not available in Splunk Web, you can disable checking
for updates by manually editing the app's app.conf file. Create or edit the
following stanza in $SPLUNK_HOME/etc/apps/<app_name>/local/app.conf:
[package]
check_for_updates = 0
Note: Edit the local version of app.conf, not the default version. This avoids
overriding your setting with the next update of the app.
Manage users
Create users
About roles
• can_delete -- This role allows the user to delete by keyword. This
capability is necessary when using the delete search operator.
Note: Do not edit the predefined roles. Instead, create custom roles that
inherit from the built-in roles, and modify the custom roles as required.
For detailed information on roles and how to assign users to roles, see the
chapter "Users and role-based access control" in the Securing Splunk Enterprise
manual.
To locate an existing user or role in Splunk Web, select Settings > Access
Controls and use the Search bar at the top of the Users or Roles page.
Wildcards are supported. By default, Splunk Enterprise searches all available
fields for the string that you enter. To search a particular field, specify
that field. For example, to search only email addresses, type
email=<email address or address fragment>; to search only the "Full name"
field, type realname=<name or name fragment>. To search for users in a given
role, use roles=<role name>.
The user's locale also affects how dates, times, numbers, etc., are formatted, as
different countries have different standards for formatting these entities.
Splunk software is localized for the following locales:
de_DE
en_GB
en_US
fr_FR
it_IT
ja_JP
ko_KR
zh_CN
zh_TW
The locale that Splunk uses for a given session can be changed by modifying
the URL that you use to access Splunk. Splunk URLs follow the form
http://host:port/locale/.... For example, when you access Splunk to log in,
the URL may appear as http://hostname:8000/en-US/account/login for US
English. To use British English settings, change the locale string to
http://hostname:8000/en-GB/account/login. The session then presents and
accepts timestamps in British English format for its duration.
Requesting a locale for which the Splunk interface has not been localized results
in the message: Invalid language Specified.
Refer to "Translate Splunk" in the Developer Manual for more information about
localizing Splunk.
Configure user session timeouts
The amount of time that elapses before a Splunk user's session times out
depends on the interaction among three timeout settings:
• The splunkweb session timeout.
• The splunkd session timeout.
• The browser session timeout.
The splunkweb and splunkd timeouts determine the maximum idle time in the
interaction between browser and Splunk. The browser session timeout
determines the maximum idle time in the interaction between user and browser.
The splunkweb and splunkd timeouts generally have the same value, as the
same field sets both of them. To set the timeout in Splunk Web:
1. Click Settings > Server settings > General settings.
2. In the Session timeout field, enter a timeout value.
3. Click Save.
This sets the user session timeout value for both splunkweb and splunkd. Initially,
they share the same value of 60 minutes. They will continue to maintain identical
values if you change the value through Splunk Web.
If, for some reason, you need to set the timeouts for splunkweb and splunkd to
different values, you can do so by editing their underlying configuration files,
web.conf (tools.sessions.timeout attribute) and server.conf (sessionTimeout
attribute). For all practical purposes, there's no reason to give them different
values. In any case, if the user is using SplunkWeb (splunkweb) to access the
Splunk instance (splunkd), the smaller of the two timeout attributes prevails. So,
if tools.sessions.timeout in web.conf has a value of "90" (minutes), and
sessionTimeout in server.conf has a value of "1h" (1 hour; 60 minutes), the
session will time out after 60 minutes.
In addition to setting the splunkweb/splunkd session value, you can also specify
the timeout for the user browser session by editing the ui_inactivity_timeout
value in web.conf. The Splunk browser session will time out once this value is
reached. The default is 60 minutes. If ui_inactivity_timeout is set to less than
1, there's no timeout -- the session will stay alive while the browser is open.
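As a sketch, the three settings live in the following stanzas, assuming the
default stanza names in web.conf and server.conf:
# web.conf
[settings]
tools.sessions.timeout = 60
ui_inactivity_timeout = 60
# server.conf
[general]
sessionTimeout = 60m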
The countdown for the splunkweb/splunkd session timeout does not begin until
the browser session reaches its timeout value. So, to determine how long the
user has before timeout, add the value of ui_inactivity_timeout to the smaller
of the timeout values for splunkweb and splunkd. For example, assume the
following:
• splunkweb timeout: 15m
• splunkd timeout: 20m
• browser (ui_inactivity_timeout) timeout: 10m
The user session stays active for 25m (15m+10m). After 25 minutes of no
activity, the user will be prompted to log in again.
Configure Splunk Enterprise to use proxies
How it works
When a client (splunkd) sends a request to the HTTP proxy server, the forward
proxy server validates the request.
• If a request is not valid, the proxy rejects the request and the client
receives an error or is redirected.
• If a request is valid, the forward proxy checks whether the requested
information is cached.
♦ If a cached copy is available, the forward proxy serves the cached
information.
♦ If the requested information is not cached, the request is sent to an
actual content server which sends the information to the forward
proxy. The forward proxy then relays the response to the client.
• Apache Server 2.4
• Apache Server 2.2
• Squid Server 3.5
Note: Splunk Enterprise supports the HTTP CONNECT method for HTTPS requests.
TLS proxying is not supported, so the proxy server must be configured to
listen on a non-SSL port.
2. Extract and install it on the machine that will run the proxy server. The
following example compiles the server from source.
gzip -d httpd-2.4.25.tar.gz
tar xvf httpd-2.4.25.tar
cd httpd-2.4.25
./configure --prefix=$PROXY_HOME
make
make install
3. Customize the Apache server httpd.conf file.
Listen 8000 <IP addresses and ports that the server listens to>
ProxyRequests On <Enables forward (standard) proxy requests>
SSLProxyEngine On <This directive toggles the usage of the SSL/TLS
Protocol Engine for proxy>
AllowCONNECT 443 <Ports that are allowed to CONNECT through the proxy>
Additional configuration (optional)
Before you configure or disable these values, please read the Apache
documentation for additional information.
2. Extract and install it on the machine that will run the proxy server. The
following example compiles the server from source.
$ gzip -d httpd-2.2.32.tar.gz
$ tar xvf httpd-2.2.32.tar
$ cd httpd-2.2.32
$ ./configure --prefix=$PROXY_HOME --enable-ssl --enable-proxy \
    --enable-proxy-connect --enable-proxy-http
$ make
$ make install
3. Customize the Apache server's httpd.conf file:
Listen 8000 <This is the list of IP addresses and ports that the server
listens to>
ProxyRequests On <Enables forward (standard) proxy requests>
SSLProxyEngine On <This directive toggles the usage of the SSL/TLS
Protocol Engine for proxy>
AllowCONNECT 443 <Ports that are allowed to CONNECT through the proxy>
Additional configuration (optional)
Before you modify or disable these settings in your environment, please read the
Apache documentation for additional information.
2. Extract and install the download on the machine that will run the proxy
server. The following example compiles Squid server 3.5 from source.
3. Customize the squid.conf file.
acl localnet src <configure all possible internal networks, a new line for
each network>
acl SSL_ports port <configure all acl SSL_ports, a new line for each port>
acl CONNECT method CONNECT <ACL for CONNECT method>
http_port 8000 <Port on which the Squid server will listen for requests>
Additional configuration (optional)
Before you configure or disable these settings in your environment, please read
the Squid documentation for additional information.
sslproxy_cert_error deny all <Use this ACL to bypass server certificate
validation errors>
sslproxy_flags DONT_VERIFY_PEER <Various flags modifying the use of SSL
while proxying https URLs>
hosts_file PROXY_HOME/hosts <Location of the host-local IP name-address
associations database>
To set up a proxy server for splunkd, you can either configure Splunk's proxy
variables in server.conf or configure the REST endpoints.
[proxyConfig]
http_proxy = <string that identifies the server proxy. When set, splunkd
sends all HTTP requests through this proxy server. The default value is
unset.>
https_proxy = <string that identifies the server proxy. When set,
splunkd sends all HTTPS requests through the proxy server defined here.
If not set, splunkd uses the proxy defined in http_proxy. The default
value is unset.>
no_proxy = <string that identifies the no proxy rules. When set, splunkd
uses the [no_proxy] rules to decide whether the proxy server needs to be
bypassed for matching hosts and IP Addresses. Requests going to
localhost/loopback address are not proxied. Default is "localhost,
127.0.0.1, ::1">
Use REST endpoints to configure splunkd to work with your
server proxy
You can also configure splunkd to work with your HTTP proxy server by
modifying the /services/server/httpsettings/proxysettings REST endpoint.
To set variables using a REST endpoint, you must have the edit_server
capability.
View the stanza:
curl -k -u <username>:<password>
https://localhost:8089/services/server/httpsettings/proxysettings/proxyConfig
Delete the stanza:
curl -k -u <username>:<password> -X DELETE
https://localhost:8089/services/server/httpsettings/proxysettings/proxyConfig
For more details and example requests and responses, see
server/httpsettings/proxysettings and
server/httpsettings/proxysettings/proxyConfig in the REST API Reference.
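To create or update the stanza, a sketch that follows the usual Splunk REST
pattern; the host, credentials, and proxy address are placeholders, and the
argument names are assumed to match the server.conf settings:
curl -k -u <username>:<password> https://localhost:8089/services/server/httpsettings/proxysettings \
    -d name=proxyConfig \
    -d http_proxy=http://10.1.8.11:8787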
To use the proxy server for communication in an indexer cluster or search head
cluster, update the following additional settings in server.conf.
[clustering]
register_replication_address = <IP address, or fully qualified
machine/domain name. This is the address on which a slave will be
available for accepting replication data. This is useful in the cases
where a slave host machine has multiple interfaces and only one of them
can be reached by another splunkd instance>
Only valid for mode=slave
[shclustering]
register_replication_address = <IP address, or fully qualified
machine/domain name. This is the address on which a member will be
available for accepting replication data. This is useful in the cases
where a member host machine has multiple interfaces and only one of them
can be reached by another splunkd instance.>
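For example, a sketch for a search head cluster member with multiple network
interfaces; the address is a placeholder:
[shclustering]
register_replication_address = 10.1.2.84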
Points to Remember
2. Verify your proxy settings for accuracy and make sure they comply with your
organization's network policies.
3. For performance issues with the proxy server, see the performance tuning tips
below.
If you have a large number of clients communicating through the proxy server,
you might see a performance impact for those clients. In the case of performance
impact:
Performance Profiling with Squid Server
If you have a large number of clients communicating through the proxy server,
you might see a performance impact for those clients. Make sure that the
proxy server is adequately provisioned in terms of CPU and memory resources.
Check the Squid profiling documentation for additional information.
Note: The App Manager is not supported for use with a proxy server. If you
use a proxy server with Splunk Web, you must download and update apps
manually.
For example, to host Splunk Web under the /lzone root endpoint, set the
following in web.conf:
root_endpoint = /lzone
For an Apache proxy server, you would then make it visible to the proxy by
mapping it in httpd.conf. Check the Apache documentation for additional
information.
#Adjusts the URL in HTTP response headers sent from a reverse proxied
server
ProxyPassReverse /lzone http://splunkweb.splunk.com:8000/lzone
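ProxyPassReverse is typically paired with a ProxyPass directive for the same
path; a sketch under that assumption:
ProxyPass /lzone http://splunkweb.splunk.com:8000/lzone
ProxyPassReverse /lzone http://splunkweb.splunk.com:8000/lzone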
Meet the Splunk AMI
Already started a copy of the Splunk Enterprise AMI on the AWS Marketplace?
Then you have an instance of Splunk Enterprise running as the Splunk user. It
will start when the machine starts.
• Paste the public IP into a new browser tab (do not hit enter yet).
• Append :8000 to the end of the IP address.
• Hit enter.
• Log into Splunk Enterprise with the credentials:
♦ username: admin
♦ password: <instance id from management console>
• On the next screen, set a new password.
Next tasks
• Follow the Search Tutorial, which steps you through uploading a file,
running basic searches, and generating reports.
• Learn about knowledge objects in the Knowledge Manager Manual.
• See Splunk administration: the big picture in the Admin Manual for an
overview of tasks in Splunk Enterprise and where you can find more
information about them.
Upgrade
See "How to upgrade Splunk" in the Installation Manual. Be sure to run a backup
before you begin the upgrade.
Get help
Configuration file reference
alert_actions.conf
The following are the spec and example files for alert_actions.conf.
alert_actions.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values for configuring global
# saved search actions in alert_actions.conf. Saved searches are configured
# in savedsearches.conf.
#
# There is an alert_actions.conf in $SPLUNK_HOME/etc/system/default/.
# To set custom configurations, place an alert_actions.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# alert_actions.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
maxresults = <integer>
* Set the global maximum number of search results sent via alerts.
* Defaults to 100.
hostname = [protocol]<host>[:<port>]
* Sets the hostname used in the web link (url) sent in alerts.
* This value accepts two forms.
* hostname
examples: splunkserver, splunkserver.example.com
* protocol://hostname:port
examples: http://splunkserver:8000,
https://splunkserver.example.com:443
* When this value is a simple hostname, the protocol and port which
are configured within splunk are used to construct the base of
the url.
* When this value begins with 'http://', it is used verbatim.
NOTE: This means the correct port must be specified if it is not
the default port for http or https.
* This is useful in cases when the Splunk server is not aware of
how to construct an externally referenceable url, such as SSO
environments, other proxies, or when the Splunk server hostname
is not generally resolvable.
* Defaults to current hostname provided by the operating system,
or if that fails, "localhost".
* When set to empty, default behavior is used.
ttl = <integer>[p]
* Optional argument specifying the minimum time to live (in seconds)
of the search artifacts, if this action is triggered.
* If p follows integer, then integer is the number of scheduled periods.
* If no actions are triggered, the artifacts will have their ttl determined
  by the "dispatch.ttl" attribute in savedsearches.conf.
* Defaults to 10p
* Defaults to 86400 (24 hours) for: email, rss
* Defaults to 600 (10 minutes) for: script
* Defaults to 120 (2 minutes) for: summary_index, populate_lookup
maxtime = <integer>[m|s|h|d]
* The maximum amount of time that the execution of an action is allowed to
  take before the action is aborted.
* Use the d, h, m and s suffixes to define the period of time:
d = day, h = hour, m = minute and s = second.
For example: 5d means 5 days.
* Defaults to 5m for everything except rss.
* Defaults to 1m for rss.
track_alert = [1|0]
* Indicates whether the execution of this action signifies a trackable
  alert.
* Defaults to 0 (false).
command = <string>
* The search command (or pipeline) which is responsible for executing
the action.
* Generally the command is a template search pipeline which is realized
with values from the saved search - to reference saved search
field values wrap them in dollar signs ($).
* For example, to reference the savedsearch name use $name$. To
reference the search, use $search$
is_custom = [1|0]
* Specifies whether the alert action is based on the custom alert
actions framework and is supposed to be listed in the search UI.
payload_format = [xml|json]
* Configure the format the alert script receives the configuration via
STDIN.
* Defaults to "xml"
label = <string>
* For custom alert actions: Define the label shown in the UI. If not
specified, the stanza name will be used instead.
description = <string>
* For custom alert actions: Define the description shown in the UI.
icon_path = <string>
* For custom alert actions: Define the icon shown in the UI for the
alert
action. The path refers to appserver/static within the app where the
alert action is defined in.
forceCsvResults = auto|<bool>
* If set to a true boolean, any saved search that includes this action will
  always store results in CSV format, instead of the internal SRS format.
* If set to a false boolean, results will always be serialized using the
  internal SRS format.
* If set to "auto", results will be serialized as CSV if the 'command'
  setting in this stanza starts with "sendalert" or contains the string
  "$results.file$".
* Defaults to "auto".
alert.execute.cmd = <string>
* For custom alert actions: Explicitly specify the command to be executed
  when the alert action is triggered. This refers to a binary or script
  in the bin folder of the app the alert action is defined in, or to a
  path pointer file, also located in the bin folder.
* If a path pointer file (*.path) is specified, the contents of the file
  is read and the result is used as the command to be executed.
  Environment variables in the path pointer file are substituted.
* If a python (*.py) script is specified it will be prefixed with the
  bundled python interpreter.
alert.execute.cmd.arg.<n> = <string>
* Provide additional arguments to the alert action execution command.
Environment variables are substituted.
################################################################################
# EMAIL: these settings are prefaced by the [email] stanza name
################################################################################
[email]
from = <string>
* Email address from which the alert originates.
* Defaults to splunk@$LOCALHOST.
to = <string>
* The To email address receiving the alert.
cc = <string>
* Any cc email addresses receiving the alert.
bcc = <string>
* Any bcc email addresses receiving the alert.
message.report = <string>
* Specify a custom email message for scheduled reports.
* Includes the ability to reference attributes from the result,
saved search, or job
message.alert = <string>
* Specify a custom email message for alerts.
* Includes the ability to reference attributes from result,
saved search, or job
subject = <string>
* Specify an alternate email subject if useNSSubject is false.
* Defaults to SplunkAlert-<savedsearchname>.
subject.alert = <string>
* Specify an alternate email subject for an alert.
* Defaults to SplunkAlert-<savedsearchname>.
subject.report = <string>
* Specify an alternate email subject for a scheduled report.
* Defaults to SplunkReport-<savedsearchname>.
useNSSubject = [1|0]
* Specify whether to use the namespaced subject (i.e subject.report) or
subject.
footer.text = <string>
* Specify an alternate email footer.
* Defaults to "If you believe you've received this email in error,
please see your Splunk administrator.\r\n\r\nsplunk > the engine for
machine data."
format = [table|raw|csv]
* Specify the format of inline results in the email.
* Accepted values: table, raw, and csv.
* Previously accepted values plain and html are no longer respected
and equate to table.
* To make emails plain or html use the content_type attribute.
* Default: table
include.results_link = [1|0]
* Specify whether to include a link to the results.
include.search = [1|0]
* Specify whether to include the search that caused an email to be sent.
include.trigger = [1|0]
* Specify whether to show the trigger condition that caused the alert to
fire.
include.trigger_time = [1|0]
* Specify whether to show the time that the alert was fired.
include.view_link = [1|0]
* Specify whether to show the title and a link to enable the user to edit
  the saved search.
content_type = [html|plain]
* Specify the content type of the email.
* plain sends email as plain text.
* html sends email as a multipart email that includes both text and html.
sendresults = [1|0]
* Specify whether the search results are included in the email. The
  results can be attached or inline, see inline (action.email.inline)
* Defaults to 0 (false).
inline = [1|0]
* Specify whether the search results are contained in the body of the alert
  email.
* If the events are not sent inline, they are attached as a csv text.
* Defaults to 0 (false).
priority = [1|2|3|4|5]
* Set the priority of the email as it appears in the email client.
* Value mapping: 1 highest, 2 high, 3 normal, 4 low, 5 lowest.
* Defaults to 3.
mailserver = <host>[:<port>]
* You must have a Simple Mail Transfer Protocol (SMTP) server available
to send email. This is not included with Splunk.
* Specifies the SMTP mail server to use when sending emails.
* <host> can be either the hostname or the IP address.
* Optionally, specify the SMTP <port> that Splunk should connect to.
* When the "use_ssl" attribute (see below) is set to 1 (true), you
must specify both <host> and <port>.
(Example: "example.com:465")
* Defaults to $LOCALHOST:25.
use_ssl = [1|0]
* Whether to use SSL when communicating with the SMTP server.
* When set to 1 (true), you must also specify both the server name or
IP address and the TCP port in the "mailserver" attribute.
* Defaults to 0 (false).
use_tls = [1|0]
* Specify whether to use TLS (transport layer security) when
communicating with the SMTP server (starttls)
* Defaults to 0 (false).
auth_username = <string>
* The username to use when authenticating with the SMTP server. If this is
  not defined or is set to an empty string, no authentication is attempted.
  NOTE: your SMTP server might reject unauthenticated emails.
* Defaults to empty string.
auth_password = <password>
* The password to use when authenticating with the SMTP server.
  Normally this value will be set when editing the email settings, however
  you can set a clear text password here and it will be encrypted on the
  next Splunk restart.
* Defaults to empty string.
sendpdf = [1|0]
* Specify whether to create and send the results as a PDF.
* Defaults to 0 (false).
sendcsv = [1|0]
* Specify whether to create and send the results as a csv file.
* Defaults to 0 (false).
pdfview = <string>
* Name of view to send as a PDF
reportPaperSize = [letter|legal|ledger|a2|a3|a4|a5]
* Default paper size for PDFs
* Accepted values: letter, legal, ledger, a2, a3, a4, a5
* Defaults to "letter".
reportPaperOrientation = [portrait|landscape]
* Paper orientation: portrait or landscape
* Defaults to "portrait".
reportIncludeSplunkLogo = [1|0]
* Specify whether to include a Splunk logo in Integrated PDF Rendering
* Defaults to 1 (true)
reportCIDFontList = <string>
* Specify the set (and load order) of CID fonts for handling
Simplified Chinese(gb), Traditional Chinese(cns),
Japanese(jp), and Korean(kor) in Integrated PDF Rendering.
* Specify in a space-separated list
* If multiple fonts provide a glyph for a given character code, the glyph
  from the first font specified in the list will be used
* To skip loading any CID fonts, specify the empty string
* Defaults to "gb cns jp kor"
reportFileName = <string>
* Specify the name of attached pdf or csv
* Defaults to "$name$-$time:%Y-%m-%d$"
width_sort_columns = <bool>
* Whether columns should be sorted from least wide to most wide left to
right.
* Valid only if format=text
* Defaults to true
preprocess_results = <search-string>
* Supply a search string to Splunk to preprocess results before emailing
them. Usually the preprocessing consists of filtering out unwanted
internal fields.
* Defaults to empty string (no preprocessing)
pdf.footer_enabled = [1 or 0]
* Set whether or not to display footer on PDF.
* Defaults to 1.
pdf.header_enabled = [1 or 0]
* Set whether or not to display header on PDF.
* Defaults to 1.
pdf.logo_path = <string>
* Define pdf logo by syntax <app>:<path-to-image>
* If set, PDF will be rendered with this logo instead of Splunk one.
* If not set, Splunk logo will be used by default
* Logo will be read from
$SPLUNK_HOME/etc/apps/<app>/appserver/static/<path-to-image> if <app>
is provided.
* Current app will be used if <app> is not provided.
pdf.header_left = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the left side of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to none; nothing is displayed in this position.
pdf.header_center = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed in the center of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to description.
pdf.header_right = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the right side of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to none; nothing is displayed in this position.
pdf.footer_left = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the left side of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to logo.
pdf.footer_center = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed in the center of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to title.
pdf.footer_right = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the right side of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to timestamp,pagination.
pdf.html_image_rendering = <bool>
* Whether images in HTML should be rendered.
* If rendering images in HTML breaks the PDF for whatever reason, you can
  disable it by setting this flag to False, so the old HTML rendering will
  be used.
* Defaults to True.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
  selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
  does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
  of this configuration.
* Used exclusively for the email alert action and the sendemail search
  command
* The default can vary. See the sslVersions setting in
  $SPLUNK_HOME/etc/system/default/alert_actions.conf for the current
  default.
sslVerifyServerCert = true|false
* If this is set to true, you should make sure that the server that is
  being connected to is a valid one (authenticated). Both the common
  name and the alternate name of the server are then checked for a
  match if they are specified in this configuration file. A
  certificate is considered verified if either is matched.
* If this is set to true, make sure
  'server.conf/[sslConfig]/sslRootCAPath' has been set correctly.
* Used exclusively for the email alert action and the sendemail search
  command
* Default is false.
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* Optional. Check the common name of the server's certificate against the
  names in this list.
* If there is no match, assume that Splunk is not authenticated against this
  server.
* 'sslVerifyServerCert' must be set to true for this setting to work.
* Used exclusively for the email alert action and the sendemail search
  command
################################################################################
# RSS: these settings are prefaced by the [rss] stanza
################################################################################
[rss]
items_count = <number>
* Number of saved RSS feeds.
* Cannot be more than maxresults (in the global settings).
* Defaults to 30.
################################################################################
# script: Used to configure any scripts that the alert triggers.
################################################################################
[script]
filename = <string>
* The filename, with no path, of the script to trigger.
* The script should be located in: $SPLUNK_HOME/bin/scripts/
* For system shell scripts on Unix, or .bat or .cmd on windows, there
are no further requirements.
* For other types of scripts, the first line should begin with a #!
marker, followed by a path to the interpreter that will run the
script.
* Example: #!C:\Python27\python.exe
* Defaults to empty string.
################################################################################
# lookup: These settings are prefaced by the [lookup] stanza. They enable the
# Splunk software to write scheduled search results to a new or existing
# CSV lookup file.
################################################################################
[lookup]
filename = <string>
* The filename, with no path, of the CSV lookup file. Filename must end
  with ".csv".
* If this file does not yet exist, the Splunk software creates it on the
  next scheduled run of the search. If the file currently exists, it is
  overwritten on each run of the search unless append=1.
* The file will be placed in the same path as other CSV lookup files:
  $SPLUNK_HOME/etc/apps/search/lookups.
* Defaults to empty string.
append = [1|0]
* Specifies whether to append results to the lookup file defined for the
filename attribute.
* Defaults to 0.
################################################################################
# summary_index: these settings are prefaced by the [summary_index] stanza
################################################################################
[summary_index]
inline = [1|0]
* Specifies whether the summary index search command will run as part of the
  scheduled search or as a follow-on action. This is useful when the results
  of the scheduled search are expected to be large.
* Defaults to 1 (true).
_name = <string>
* The name of the summary index where Splunk will write the events.
* Defaults to "summary".
################################################################################
# populate_lookup: these settings are prefaced by the [populate_lookup] stanza
################################################################################
[populate_lookup]
dest = <string>
* Name of the lookup table to populate (stanza name in transforms.conf) or
  the lookup file path to where you want the data written. If a path is
  specified it MUST be relative to $SPLUNK_HOME and a valid lookups
  directory.
  For example: "etc/system/lookups/<file-name>" or
  "etc/apps/<app>/lookups/<file-name>"
* The user executing this action MUST have write permissions to the app for
  this action to work properly.
[<custom_alert_action>]
alert_actions.conf.example
# Version 7.2.1
#
# This is an example alert_actions.conf. Use this file to configure alert
# actions for saved searches.
#
# To use one or more of these configurations, copy the configuration block
# into alert_actions.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[email]
# keep the search artifacts around for 24 hours
ttl = 86400
# if no @ is found in the address the hostname of the current machine is
# appended
from = splunk
format = table
inline = false
sendresults = true
hostname = CanAccessFromTheWorld.com
use_tls = 1
sslVersions = tls1.2
sslVerifyServerCert = true
sslCommonNameToCheck = host1, host2
[rss]
# at most 30 items in the feed
items_count=30
[summary_index]
# don't need the artifacts anytime after they're in the summary index
ttl = 120
# make sure the following keys are not added to marker (command, ttl,
# maxresults, _*)
command = summaryindex addtime=true
index="$action.summary_index._name{required=yes}$"
file="$name$_$#random$.stash" name="$name$"
marker="$action.summary_index*{format=$KEY=\\\"$VAL\\\",
key_regex="action.summary_index.(?!(?:command|maxresults|ttl|(?:_.*))$)(.*)"}$"
[custom_action]
# flag the action as custom alert action
is_custom = 1
app.conf
The following are the spec and example files for app.conf.
app.conf.spec
# Version 7.2.1
#
# This file maintains the state of a given app in Splunk Enterprise. It may
# also be used to customize certain aspects of an app.
#
# There is no global, default app.conf. Instead, an app.conf may exist in
# each app in Splunk Enterprise.
#
# You must restart Splunk Enterprise to reload manual changes to app.conf.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# Settings for how an app appears in Launcher (and online on Splunkbase)
#
[author=<name>]
email = <e-mail>
company = <company-name>
[id]
group = <group-name>
name = <app-name>
version = <version-number>
[launcher]
# global setting
remote_tab = <bool>
* Set whether the Launcher interface will connect to apps.splunk.com.
* This setting only applies to the Launcher app and should not be set in
  any other app.
* Defaults to true.
# per-application settings
description = <string>
* Short explanatory string displayed underneath the app's title in
  Launcher.
* Descriptions should be 200 characters or less because most users won't
  read long descriptions!
author = <name>
* For apps you intend to post to Splunkbase, enter the username of your
  splunk.com account.
* For internal-use-only apps, include your full name and/or contact info
  (e.g. email).
# Your app can include an icon which will show up next to your app in Launcher
# and on Splunkbase. You can also include a screenshot, which will show up on
# Splunkbase when the user views info about your app before downloading it.
# You do not need to include an icon, but if you do, icon file names must end
# with "Icon" before the file extension, and the "I" must be capitalized. For
# example, "mynewIcon.png".
# Screenshots are optional.
#
# There is no setting in app.conf for these images. Splunk Web places files you
# upload into the <app_directory>/appserver/static directory. These images will
# not appear in your app.
#
# Move or place icon images in the <app_directory>/static directory.
# Move or place screenshot images in the <app_directory>/default/static
# directory.
# Launcher and Splunkbase will automatically detect the images.
#
# For example:
#
#     <app_directory>/static/appIcon.png (the capital "I" is required!)
#     <app_directory>/default/static/screenshot.png
#
# An icon image must be a 36px by 36px PNG file.
# An app screenshot must be a 623px by 350px PNG file.
#
#
# [package] defines upgrade-related metadata, and will be
# used in future versions of Splunk Enterprise to streamline app upgrades.
#
[package]
id = <appid>
* id should be omitted for internal-use-only apps which are not intended
  to be uploaded to Splunkbase
* id is required for all new apps uploaded to Splunkbase. Future versions
  of Splunk Enterprise will use appid to correlate locally-installed apps
  and the same app on Splunkbase (e.g. to notify users about app updates)
* id must be the same as the folder name in which your app lives in
  $SPLUNK_HOME/etc/apps
* id must adhere to cross-platform folder-name restrictions:
  * must contain only letters, numbers, "." (dot), and "_" (underscore)
    characters
  * must not end with a dot character
  * must not be any of the following names: CON, PRN, AUX, NUL,
    COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9,
    LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9
check_for_updates = <bool>
* Set whether Splunk Enterprise should check Splunkbase for updates to
this app.
* Defaults to true.
#
# Set install settings for this app
#
[install]
is_configured = true | false
* Stores indication of whether the application's custom setup has been
performed
* Defaults to false
build = <integer>
* Required.
* Must be a positive integer.
* Increment this whenever you change files in appserver/static.
* Every release must change both "version" and "build" settings.
* Ensures browsers don't use cached copies of old static files
in new versions of your app.
* Build is a single integer, unlike version which can be a complex string
  like 1.5.18.
install_source_checksum = <string>
* Records a checksum of the tarball from which a given app was installed.
* Splunk Enterprise will automatically populate this value upon install.
* You should *not* set this value explicitly within your app!
#
# Handle reloading of custom .conf files (4.2+ versions only)
#
[triggers]
* If you do not include [triggers] settings and your app uses a custom
  config file, a Splunk Enterprise restart will be required after every
  state change.
* Specifying "simple" implies that Splunk Enterprise will take no
special action to
reload your custom conf file.
* Specify "access_endpoints" and a URL to a REST endpoint, and Splunk
Enterprise will
call its _reload() method at every app state change.
* Specify "http_get" and a URL to a REST endpoint, and Splunk Enterprise
will simulate
an HTTP GET request against this URL at every app state change.
* Specify "http_post" and a URL to a REST endpoint, and Splunk
Enterprise will simulate
an HTTP POST request against this URL at every app state change.
* "rest_endpoints" is reserved for Splunk Enterprise internal use for
reloading
restmap.conf.
* Examples:
[triggers]
#
# Set UI-specific settings for this app
#
[ui]
show_in_nav = true | false
* Indicates if this app should be shown in global app dropdown
label = <string>
* Defines the name of the app shown in the Splunk Enterprise GUI and
  Launcher.
* Recommended length between 5 and 80 characters.
* Must not include "Splunk For" prefix.
* Label is required.
* Examples of good labels:
IMAP Monitor
SQL Server Integration Services
FISMA Compliance
docs_section_override = <string>
* Defines override for auto-generated app-specific documentation links
* If not specified, app-specific documentation link will
include [<app-name>:<app-version>]
* If specified, app-specific documentation link will
include [<docs_section_override>]
* This only applies to apps with documentation on the Splunk
documentation site
attribution_link = <string>
* URL that users can visit to find third-party software credits and
attributions for assets the app uses.
* External links must start with http:// or https://.
* Values that do not start with http:// or https:// will be interpreted
  as Quickdraw "location" strings and translated to internal documentation
  references.
setup_view = <string>
* Optional setting
* Defines custom setup view found within /data/ui/views REST endpoint
* If not specified, default to setup.xml
#
# Credential-verification scripting (4.2+ versions only)
# Credential entries are superseded by passwords.conf from 6.3 onwards.
# While the entries here are still honored post-6.3, updates to these will
# occur in passwords.conf which will shadow any values present here.
#
[credentials_settings]
verify_script = <string>
* Optional setting.
* Command line to invoke to verify credentials used for this app.
* For scripts, the command line should include both the interpreter and the
  script for it to run.
  * Example: "$SPLUNK_HOME/bin/python"
    "$SPLUNK_HOME/etc/apps/<myapp>/bin/$MY_SCRIPT"
* The invoked program is communicated with over standard in / standard out
  via the same protocol as splunk scripted auth.
* Paths incorporating variable expansion or explicit spaces must be quoted.
  * For example, a path including $SPLUNK_HOME should be quoted, as it
    likely will expand to C:\Program Files\Splunk
[credential:<realm>:<username>]
password = <password>
* Password that corresponds to the given username for the given realm.
  Note that realm is optional.
* The password can be in clear text, however when saved from splunkd the
  password will always be encrypted.
[diag]
extension_script = <filename>
* Setting this variable declares that this app will put additional
  information into the troubleshooting & support oriented output of the
  'splunk diag' command.
* Must be a python script.
* Must be a simple filename, with no directory separators.
* The script must exist in the 'bin' sub-directory in the app.
* Full discussion of the interface is located on the Developer portal.
  See http://dev.splunk.com/view/SP-CAAAE8H
* Defaults to unset, no app-specific data collection will occur.
data_limit = <positive integer>[b|kb|MB|GB]
* Defines a soft ceiling for the amount of uncompressed data that can be
  added to the diag by the app extension.
* Large diags damage the main functionality of the tool by creating data
  blobs too large to copy around or upload.
* Use this setting to ensure that your extension script does not
  accidentally produce far too much data.
* Once data produced by this app extension reaches the limit, diag will not
  add any further files on behalf of the extension.
* After diag has finished adding a file which goes over this limit, all
  further files will not be added.
* Must be a positive number followed by a size suffix.
  * Valid suffixes: b: bytes, kb: kilobytes, mb: megabytes, gb: gigabytes
  * Suffixes are case insensitive.
* Defaults to 100MB.
app.conf.example
# Version 7.2.1
#
# The following are example app.conf configurations. Configure properties
# for your custom application.
#
# There is NO DEFAULT app.conf.
#
# To use one or more of these configurations, copy the configuration block
# into app.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[launcher]
author=<author of app>
description=<textual description of app>
version=<version of app>
audit.conf
The following are the spec and example files for audit.conf.
audit.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values you can use to configure
# auditing and event signing in audit.conf.
#
# There is NO DEFAULT audit.conf. To set custom configurations, place an
# audit.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# audit.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
########################################################################################
# KEYS: specify your public and private keys for encryption.
########################################################################################
queueing = [true|false]
* Turn off sending audit events to the indexQueue -- tail the audit events
  instead.
* If this is set to 'false', you MUST add an inputs.conf stanza to tail the
  audit log in order to have the events reach your index.
* Defaults to true.
audit.conf.example
# Version 7.2.1
#
# This is an example audit.conf. Use this file to configure auditing.
#
# There is NO DEFAULT audit.conf.
#
# To use one or more of these configurations, copy the configuration block
# into audit.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
authentication.conf
The following are the spec and example files for authentication.conf.
authentication.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values for configuring
# authentication via authentication.conf.
#
# There is an authentication.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place an authentication.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# authentication.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
# * You can also define global settings outside of any stanza, at the top
#   of the file.
# * Each conf file should have at most one default stanza. If there are
#   multiple default stanzas, attributes are combined. In the case of
#   multiple definitions of the same attribute, the last definition in the
#   file wins.
# * If an attribute is defined at both the global level and in a specific
#   stanza, the value in the specific stanza takes precedence.
[authentication]
* Follow this stanza name with any number of the following attribute/value
  pairs.
authType = [Splunk|LDAP|Scripted|SAML|ProxySSO]
* Specify which authentication system to use.
* Supported values: Splunk, LDAP, Scripted, SAML, ProxySSO.
* Defaults to Splunk.
authSettings = <authSettings-key>,<authSettings-key>,...
* Key to look up the specific configurations of chosen authentication
system.
* <authSettings-key> is the name of a stanza header that specifies
attributes for scripted authentication, SAML, ProxySSO and for an
LDAP
strategy. Those stanzas are defined below.
* For LDAP, specify the LDAP strategy name(s) here. If you want Splunk to
  query multiple LDAP servers, enter a comma-separated list of all
  strategies. Each strategy must be defined in its own stanza. The order
  in which you specify the strategy names will be the order Splunk uses
  to query their servers when looking for a user.
* For scripted authentication, <authSettings-key> should be a single
stanza name.
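For instance, querying two LDAP strategies in order might look like this sketch (strategy names are hypothetical; each [corpLDAP*] stanza would be defined as described under 'LDAP settings' below):

[authentication]
authType = LDAP
# Splunk queries these strategies in the order listed when looking for a
# user:
authSettings = corpLDAP1,corpLDAP2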
passwordHashAlgorithm =
[SHA512-crypt|SHA256-crypt|SHA512-crypt-<num_rounds>|SHA256-crypt-<num_rounds>|MD5-crypt]
* For the default "Splunk" authType, this controls how hashed passwords
are
stored in the $SPLUNK_HOME/etc/passwd file.
* "MD5-crypt" is an algorithm originally developed for FreeBSD in the
early
1990's which became a widely used standard among UNIX machines. It
was
also used by Splunk up through the 5.0.x releases. MD5-crypt runs the
salted password through a sequence of 1000 MD5 operations.
* "SHA256-crypt" and "SHA512-crypt" are newer versions that use 5000
rounds
of the SHA256 or SHA512 hash functions. This is slower than
MD5-crypt and
therefore more resistant to dictionary attacks. SHA512-crypt is used
  for system passwords on many versions of Linux.
* These SHA-based algorithms can optionally be followed by a number of
rounds
to use. For example, "SHA512-crypt-10000" will use twice as many
rounds
of hashing as the default implementation. The number of rounds must
be at
least 1000.
If you specify a very large number of rounds (i.e. more than 20x the
default value of 5000), splunkd may become unresponsive and
connections to
splunkd (from splunkweb or CLI) will time out.
* This setting only affects new password settings (either when a user is
  added or a user's password is changed). Existing passwords will
continue
to work but retain their previous hashing algorithm.
* The default is "SHA512-crypt".
externalTwoFactorAuthVendor = <string>
* OPTIONAL.
* A valid multifactor vendor string enables multifactor authentication
  and loads support for the corresponding vendor, if supported by Splunk.
* Empty string will disable multifactor authentication in Splunk.
* Currently Splunk supports Duo and RSA as multifactor authentication
vendors.
externalTwoFactorAuthSettings = <externalTwoFactorAuthSettings-key>
* OPTIONAL.
* Key to look up the specific configuration of chosen multifactor
authentication vendor.
LDAP settings
[<authSettings-key>]
* Follow this stanza name with the attribute/value pairs listed below.
* For multiple strategies, you will need to specify multiple instances of
  this stanza, each with its own stanza name and a separate set of
  attributes.
* The <authSettings-key> must be one of the values listed in the
authSettings attribute, specified above in the [authentication]
stanza.
host = <string>
* REQUIRED
* This is the hostname of the LDAP server.
* Be sure that your Splunk server can resolve the host name.
SSLEnabled = [0|1]
* OPTIONAL
* Defaults to disabled (0)
* See the file $SPLUNK_HOME/etc/openldap/ldap.conf for SSL LDAP
settings
port = <integer>
* OPTIONAL
* This is the port that Splunk should use to connect to your LDAP
server.
* Defaults to port 389 for non-SSL and port 636 for SSL
bindDN = <string>
* OPTIONAL, leave this blank to retrieve your LDAP entries using
anonymous bind (must be supported by the LDAP server)
* Distinguished name of the user that will be retrieving the LDAP
entries
* This user must have read access to all LDAP users and groups you wish
to
use in Splunk.
bindDNpassword = <password>
* OPTIONAL, leave this blank if anonymous bind is sufficient
* Password for the bindDN user.
userBaseDN = <string>
* REQUIRED
* These are the distinguished names of the LDAP entries whose subtrees
  contain the users.
* Enter a ';' delimited list to search multiple trees.
userBaseFilter = <string>
* OPTIONAL
* This is the LDAP search filter you wish to use when searching for
users.
* Highly recommended, especially when there are many entries in your
LDAP
user subtrees
* When used properly, search filters can significantly speed up LDAP
queries
* Example that matches users in the IT or HR department:
* userBaseFilter = (|(department=IT)(department=HR))
* See RFC 2254 for more detailed information on search filter syntax
* This defaults to no filtering.
userNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is the username.
* NOTE: This attribute should use case insensitive matching for its
values,
and the values should not contain whitespace
* Usernames are case insensitive in Splunk
* In Active Directory, this is 'sAMAccountName'
* A typical attribute for this is 'uid'
realNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is their real name
(human readable).
* A typical attribute for this is 'cn'
emailAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is their email address.
* Defaults to 'mail'
groupMappingAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is used by group entries
to
declare membership.
* Groups are often mapped with user DN, so this defaults to 'dn'
* Set this if groups are mapped using a different attribute
* Usually only needed for OpenLDAP servers.
* A typical attribute used to map users to groups is 'uid'
* For example, assume a group declares that one of its members is
'splunkuser'
* This implies that every user with 'uid' value 'splunkuser' will
be
mapped to that group
groupBaseDN = [<string>;<string>;...]
* REQUIRED
* These are the distinguished names of the LDAP entries whose subtrees
  contain the groups.
* Enter a ';' delimited list to search multiple trees.
* If your LDAP environment does not have group entries, there is a
configuration that can treat each user as its own group
* Set groupBaseDN to the same as userBaseDN, which means you will
search
for groups in the same place as users
* Next, set the groupMemberAttribute and groupMappingAttribute to the same
  attribute as userNameAttribute
* This means the entry, when treated as a group, will use the
username
value as its only member
* For clarity, you should probably also set groupNameAttribute to the
same
value as userNameAttribute as well
groupBaseFilter = <string>
* OPTIONAL
* The LDAP search filter Splunk uses when searching for static groups
* Like userBaseFilter, this is highly recommended to speed up LDAP
queries
* See RFC 2254 for more information
* This defaults to no filtering
dynamicGroupFilter = <string>
* OPTIONAL
* The LDAP search filter Splunk uses when searching for dynamic groups
* Only configure this if you intend to retrieve dynamic groups on your
LDAP server
* Example: '(objectclass=groupOfURLs)'
dynamicMemberAttribute = <string>
* OPTIONAL
* Only configure this if you intend to retrieve dynamic groups on your
LDAP server
* This is REQUIRED if you want to retrieve dynamic groups
* This attribute contains the LDAP URL needed to retrieve members
dynamically
* Example: 'memberURL'
groupNameAttribute = <string>
* REQUIRED
* This is the group entry attribute whose value stores the group name.
* A typical attribute for this is 'cn' (common name)
* Recall that if you are configuring LDAP to treat user entries as
their own
group, user entries must have this attribute
groupMemberAttribute = <string>
* REQUIRED
* This is the group entry attribute whose values are the group's members.
* Typical attributes for this are 'member' and 'memberUid'
* For example, consider the groupMappingAttribute example above using
groupMemberAttribute 'member'
* To declare 'splunkuser' as a group member, its attribute 'member'
must
have the value 'splunkuser'
nestedGroups = <bool>
* OPTIONAL
* Controls whether Splunk will expand nested groups using the 'memberof'
  extension.
* Set to 1 if you have nested groups you want to expand and the
  'memberof' extension is enabled on your LDAP server.
charset = <string>
* OPTIONAL
* ONLY set this for an LDAP setup that returns non-UTF-8 encoded data.
LDAP
is supposed to always return UTF-8 encoded data (See RFC 2251), but
some
tools incorrectly return other encodings.
* Follows the same format as CHARSET in props.conf (see props.conf.spec)
* An example value would be "latin-1"
anonymous_referrals = <bool>
* OPTIONAL
* Set this to 0 to turn off referral chasing
* Set this to 1 to turn on anonymous referral chasing
* IMPORTANT: We only chase referrals using anonymous bind. We do NOT
support
rebinding using credentials.
* If you do not need referral support, we recommend setting this to 0
* If you wish to make referrals work, set this to 1 and ensure your
server
allows anonymous searching
* Defaults to 1
sizelimit = <integer>
* OPTIONAL
* Limits the number of entries we request in an LDAP search.
* IMPORTANT: The max entries returned is still subject to the maximum
imposed by your LDAP server
* Example: If you set this to 5000 and the server limits it to 1000,
you'll still only get 1000 entries back
* Defaults to 1000
timelimit = <integer>
* OPTIONAL
* Limits the amount of time in seconds we will wait for an LDAP search
request to complete
* If your searches finish quickly, you should lower this value from the
default
* Defaults to 15 seconds
* Maximum value is 30 seconds
network_timeout = <integer>
* OPTIONAL
* Limits the amount of time a socket will poll a connection without
activity
* This is useful for determining if your LDAP server cannot be reached
* IMPORTANT: As a connection could be waiting for search results, this
value
must be higher than 'timelimit'
* Like 'timelimit', if you have a fast connection to your LDAP server,
we
recommend lowering this value
* Defaults to 20
Map roles
[roleMap_<authSettings-key>]
* The mapping of Splunk roles to LDAP groups for the LDAP strategy
specified
by <authSettings-key>
* IMPORTANT: this role mapping ONLY applies to the specified strategy.
* Follow this stanza name with several Role-to-Group(s) mappings as
defined
below.
* Note: Importing groups for the same user from different strategies is
not
supported.
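A sketch of such a mapping for a hypothetical LDAP strategy named 'corpLDAP' (role and group names are placeholders; values are semicolon-delimited LDAP groups, as in the examples at the end of this file):

[roleMap_corpLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers;Contractors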
Scripted authentication
[<authSettings-key>]
* Follow this stanza name with the following attribute/value pairs:
scriptPath = <string>
* REQUIRED
* This is the full path to the script, including the path to the program
that runs it (python)
* For example: "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/etc/system/bin/$MY_SCRIPT"
* Note: If a path contains spaces, it must be quoted. The example above
handles the case where SPLUNK_HOME contains a space
scriptSearchFilters = [1|0]
* OPTIONAL - Only set this to 1 to call the script to add search
filters.
* 0 disables (default)
[cacheTiming]
* Use these settings to adjust how long Splunk will use the answers
returned
from script functions before calling them again.
[splunk_auth]
* Settings for Splunk's internal authentication system.
expirePasswordDays = <positive integer>
* Specifies the number of days before the password expires after a
reset.
* Minimum value: 0
* Maximum value: 3650
* Default: 90
* Splunk software ignores negative values.
* This setting is optional.
expireUserAccounts = <boolean>
* Specifies whether password expiration is enabled.
* Defaults to false (user passwords do not expire).
* This setting is optional.
forceWeakPasswordChange = <boolean>
* Specifies whether users must change a weak password.
* Defaults to false (users can keep weak password).
* This setting is optional.
lockoutUsers = <boolean>
* Specifies whether locking out users is enabled.
* Defaults to true (users will be locked out on incorrect logins).
* This setting is optional.
* If you enable this setting on members of a search head cluster, user
lockout
state applies only per SHC member, not to the entire cluster.
lockoutAttempts = <positive integer>
* Specifies the number of unsuccessful login attempts that can occur
  before a user is locked out.
* The unsuccessful login attempts must occur within
  'lockoutThresholdMins' minutes.
* Any value less than 1 will be ignored.
* Minimum value: 1
* Maximum value: 64
* Default: 5
* This setting is optional.
* If you enable this setting on members of a search head cluster, user
lockout
state applies only per SHC member, not to the entire cluster.
enablePasswordHistory = <boolean>
* Specifies whether password history is enabled.
* Defaults to false.
* When set to true, Splunk software maintains a history of passwords
that have been used previously.
* This setting is optional.
constantLoginTime = <number>
* The amount of time, in seconds, that the authentication manager waits
  before returning any kind of response to a login request.
* When you set this setting, login will be guaranteed to take the amount
  of time you specify. If necessary, the authentication manager adds a
  delay to the actual response time to keep this guarantee.
* This setting is optional.
* Minimum value: 0 (Disables login time guarantee)
* Maximum value: 5.0
* Default: 0
verboseLoginFailMsg = <boolean>
* Specifies whether or not the login failure message explains
the failure reason.
* When set to true, Splunk software displays a message on login
along with the failure reason.
* When set to false, Splunk software displays a generic failure
message without a specific failure reason.
* This setting is optional.
* Default: true
SAML settings
[<saml-authSettings-key>]
* Follow this stanza name with the attribute/value pairs listed below.
* The <authSettings-key> must be one of the values listed in the
  authSettings attribute, specified above in the [authentication] stanza.
fqdn = <string>
* OPTIONAL
* The fully qualified domain name where this Splunk instance is running.
* If this value is not specified, Splunk will default to the value
  specified in server.conf.
* If this value is specified and 'http://' or 'https://' prefix is not
  present, Splunk will use the SSL setting for splunkweb.
* Splunk will use this information to populate the
'assertionConsumerServiceUrl'.
idpSSOUrl = <url>
* REQUIRED
* The protocol endpoint on the IDP (Identity Provider) where the
AuthNRequests should be sent.
* SAML requests will fail if this information is missing.
idpAttributeQueryUrl = <url>
* OPTIONAL
* The protocol endpoint on the IDP (Identity Provider) where the
attribute
query requests should be sent.
* Attribute queries can be used to get the latest 'role' information,
if there is support for Attribute queries on the IDP.
* When this setting is absent, Splunk will cache the role information
  from the SAML assertion and use it to run saved searches.
idpCertPath = <Pathname>
* OPTIONAL
* This setting is required if 'signedAssertion' is set to true.
* This value is relative to $SPLUNK_HOME/etc/auth/idpCerts.
* The value for this setting can be the name of the certificate file or
a directory.
* If it is empty, Splunk will automatically verify with certificates in
all subdirectories
present in $SPLUNK_HOME/etc/auth/idpCerts.
* If the SAML response is to be verified with an IDP (Identity Provider)
  certificate that is self-signed, then this setting holds the filename
  of the certificate.
* If the SAML response is to be verified with a certificate that is part
  of a certificate chain (root, intermediate(s), leaf), create a
  subdirectory and place the certificate chain as files in the
  subdirectory.
* If there are multiple end certificates, create a subdirectory such that
  one subdirectory holds one certificate chain.
* If multiple such certificate chains are present, the assertion is
  considered verified if validation succeeds with any certificate chain.
* The file names within a certificate chain should be such that the root
  certificate is alphabetically before the intermediate, which is
  alphabetically before the end cert.
  ex. cert_1.pem has the root, cert_2.pem has the first intermediate
  cert, cert_3.pem has the second intermediate certificate, and
  cert_4.pem has the end certificate.
idpSLOUrl = <url>
* OPTIONAL
* The protocol endpoint on the IDP (Identity Provider) where a SP
(Service Provider) initiated Single logout request should be sent.
errorUrl = <url>
* OPTIONAL
* The url to be displayed for a SAML error. Errors may be due to
  erroneous or incomplete configuration in either the IDP or Splunk.
  This url can be absolute or relative. An absolute url should follow the
  pattern <protocol>:[//]<host> e.g. https://www.external-site.com.
  Relative urls should start with '/'. A relative url will show up as an
  internal link of the Splunk instance, e.g.
  https://splunkhost:port/relativeUrlWithSlash
errorUrlLabel = <string>
* OPTIONAL
* Label or title of the content pointed to by errorUrl.
entityId = <string>
* REQUIRED
* The entity id for SP connection as configured on the IDP.
issuerId = <string>
* REQUIRED
* The unique identifier of the identity provider.
The value of this setting corresponds to attribute "entityID" of
"EntityDescriptor" node in IdP metadata document.
* If you configure SAML using IdP metadata, this field will be extracted
from
the metadata.
* If you configure SAML manually, then you must configure this setting.
* When Splunk software tries to verify the SAML response, the issuerId
specified here must match the 'Issuer' field in the SAML response.
Otherwise,
validation of the SAML response will fail.
signedAssertion = [true|false]
* OPTIONAL
* This tells Splunk if the SAML assertion has been signed by the IDP
* If set to false, Splunk will not verify the signature of the
assertion
using the certificate of the IDP.
* Currently, we accept only signed assertions.
* Defaults to true.
attributeQuerySoapPassword = <password>
* OPTIONAL
* This setting is required if 'attributeQueryUrl' is specified.
* Attribute query requests are made using SOAP using basic
authentication
* The password to be used when making an attribute query request.
* This string will be obfuscated upon splunkd startup.
attributeQuerySoapUsername = <string>
* OPTIONAL
* This setting is required if 'attributeQueryUrl' is specified.
* Attribute Query requests are made using SOAP using basic
authentication
* The username to be used when making an attribute query request.
redirectAfterLogoutToUrl = <url>
* OPTIONAL
* The user will be redirected to this url after logging out of Splunk.
* If this is not specified and an idpSLOUrl is also missing, the user
  will be redirected to splunk.com after logout.
maxAttributeQueryThreads = <int>
* OPTIONAL
* Defaults to 2, max is 10
* Number of threads to use to make attribute query requests.
* Changes to this will require a restart to take effect.
maxAttributeQueryQueueSize = <int>
* OPTIONAL
* Defaults to 50
* The number of attribute query requests to queue, set to 0 for infinite
size.
* Changes to this will require a restart to take effect.
attributeQueryTTL = <ttl in seconds>
* OPTIONAL
* Determines the time for which Splunk will cache the user and role
information.
* Once the ttl expires, Splunk will make an attribute query request to
retrieve the role information.
* Default ttl if not specified, is 3600 seconds.
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2"
* If not set, defaults to the setting in server.conf.
sslCommonNameToCheck = <commonName>
* OPTIONAL
* If this value is set, and 'sslVerifyServerCert' is set to true,
splunkd will limit most outbound HTTPS connections to hosts which use
a cert with this common name.
* If not set, Splunk uses the setting specified in server.conf.
ecdhCurveName = <string>
* DEPRECATED; use 'ecdhCurves' instead.
* ECDH curve to use for ECDH key negotiation.
* If not set, Splunk uses the setting specified in server.conf.
ecdhCurves = <comma separated list of ec curves>
* ECDH curves to use for ECDH key negotiation.
* The curves should be specified in the order of preference.
* The client sends these curves as a part of Client Hello.
* The server supports only the curves specified in the list.
* We only support named curves specified by their SHORT names.
(see struct ASN1_OBJECT in asn1.h)
* The list of valid named curves by their short/long names can be
obtained
by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default is empty string.
* e.g. ecdhCurves = prime256v1,secp384r1,secp521r1
* If not set, Splunk uses the setting specified in server.conf.
clientCert = <path>
* Full path to the client certificate PEM format file.
* Certificates are auto-generated upon first starting Splunk.
* You may replace the auto-generated certificate with your own.
* Default is $SPLUNK_HOME/etc/auth/server.pem.
* If not set, Splunk uses the setting specified in
server.conf/[sslConfig]/serverCert.
sslKeysfile = <filename>
* DEPRECATED; use 'clientCert' instead.
* File is in the directory specified by 'caPath' (see below).
* Default is server.pem.
sslPassword = <password>
* Optional server certificate password.
* If unset, Splunk uses the setting specified in server.conf.
* Default is password.
sslKeysfilePassword = <password>
* DEPRECATED; use 'sslPassword' instead.
caCertFile = <filename>
* OPTIONAL
* Public key of the signing authority.
* Default is cacert.pem.
* If not set, Splunk uses the setting specified in server.conf.
caPath = <path>
* DEPRECATED; use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not
absolute
paths, then they will be relative to this path.
* Default is $SPLUNK_HOME/etc/auth.
sslVerifyServerCert = <bool>
* OPTIONAL
* Used by distributed search: when making a search request to another
server in the search cluster.
* If not set, Splunk uses the setting specified in server.conf.
nameIdFormat = <string>
* OPTIONAL
* If supported by IDP, while making SAML Authentication request this
value can
be used to specify the format of the Subject returned in SAML
Assertion.
ssoBinding = <string>
* OPTIONAL
* This is the binding that will be used when making an SP-initiated SAML
  request.
* Acceptable options are 'HTTPPost' and 'HTTPRedirect'
* Defaults to 'HTTPPost'
* This binding must match the one configured on the IDP.
sloBinding = <string>
* OPTIONAL
* This is the binding that will be used when making a logout request or
  sending a logout response to complete the logout workflow.
* Acceptable options are 'HTTPPost' and 'HTTPRedirect'
* Defaults to 'HTTPPost'
* This binding must match the one configured on the IDP.
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256
* Allows only SAML responses that are signed using any one of the
specified
algorithms.
* This setting is applicable for both HTTP POST and HTTP Redirect
binding.
* Provide a semicolon-separated list of signature algorithms for the
SAML responses
that you want Splunk Web to accept. Splunk software rejects any SAML
responses
that are not signed by the specified algorithms.
* For improved security, set it to 'RSA-SHA256'.
* OPTIONAL
* Defaults to 'RSA-SHA1;RSA-SHA256'.
replicateCertificates = <boolean>
* OPTIONAL
* Enabled by default. IdP certificate files will be replicated across a
  search head cluster setup.
* If disabled, IdP certificate files need to be replicated manually across
  the SHC, or else verification of SAML signed assertions will fail.
* This setting will have no effect if search head clustering is disabled.
Map roles
[roleMap_<saml-authSettings-key>]
* The mapping of Splunk roles to SAML groups for the SAML stanza
specified
by <authSettings-key>
* If a SAML group is not explicitly mapped to a Splunk role, but has the
  same name as a valid Splunk role, then for ease of configuration it is
  auto-mapped to that Splunk role.
* Follow this stanza name with several Role-to-Group(s) mappings as
defined
below.
SAML User Roles Map
[userToRoleMap_<saml-authSettings-key>]
* The mapping of SAML user to Splunk roles, realname and email,
for the SAML stanza specified by <authSettings-key>
* Follow this stanza name with several User-to-Role::Realname::Email
mappings
as defined below.
* The stanza is used only when the IDP does not support Attribute Query
Request
[authenticationResponseAttrMap_SAML]
* Splunk expects email, real name and roles to be returned as SAML
Attributes in SAML assertion. This stanza can be used to map attribute
names
to what Splunk expects. These are optional settings and are only
needed for
certain IDPs.
role = <string>
* OPTIONAL
* Attribute name to be used as role in SAML Assertion.
* Default is "role"
realName = <string>
* OPTIONAL
* Attribute name to be used as realName in SAML Assertion.
* Default is "realName"
mail = <string>
* OPTIONAL
* Attribute name to be used as email in SAML Assertion.
* Default is "mail"
Settings for Proxy SSO mode
[roleMap_proxySSO]
[userToRoleMap_proxySSO]
* The mapping of ProxySSO user to Splunk roles
* Follow this stanza name with several User-to-Role(s) mappings as
defined
below.
[proxysso-authsettings-key]
* Follow this stanza name with the attribute/value pairs listed below.
blacklistedUsers = <comma separated list of user names>
* Comma separated list of user names from the proxy server headers to be
  blacklisted by the Splunk platform.
Secret Storage
[secrets]
disabled = <bool>
* Toggles integration with platform-provided secret storage facilities.
* Defaults to false if Common Criteria mode is enabled.
* Defaults to true if Common Criteria mode is disabled.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
filename = <filename>
* Designates a Python script that integrates with platform-provided
secret storage facilities, like the GNOME keyring.
* <filename> should be the name of a Python script located in one of the
following directories:
$SPLUNK_HOME/etc/apps/*/bin
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/searchscripts
* <filename> should be a pure basename; it should contain no path
separators.
* <filename> should end with a .py file extension.
namespace = <string>
* Use an instance-specific string as a namespace within secret storage.
* When using the GNOME keyring, this namespace is used as a keyring
name.
* If multiple Splunk instances must store separate sets of secrets
within the
same storage backend, this value should be customized to be unique for
each
Splunk instance.
* Defaults to "splunk".
[<duo-externalTwoFactorAuthSettings-key>]
* <duo-externalTwoFactorAuthSettings-key> must be the value listed in the
  externalTwoFactorAuthSettings attribute, specified above in the
  [authentication] stanza.
* This stanza contains Duo specific multifactor authentication settings
and will be
activated only when externalTwoFactorAuthVendor is Duo.
* All the attributes below, except appSecretKey, are provided by Duo.
apiHostname = <string>
* REQUIRED
* Duo's API endpoint which performs the actual multifactor
authentication.
* e.g. apiHostname = api-xyz.duosecurity.com
integrationKey = <string>
* REQUIRED
* Duo's integration key for Splunk. Must be exactly 20 characters long.
* Integration key will be obfuscated before being saved here for
security.
secretKey = <string>
* REQUIRED
* Duo's secret key for Splunk. Must be exactly 40 characters long.
* Secret key will be obfuscated before being saved here for security.
appSecretKey = <string>
* REQUIRED
* Splunk application-specific secret key, which should be random and
  locally generated.
* Must be at least 40 characters long.
* This secret key would not be shared with Duo.
* Application secret key will be obfuscated before being saved here for
security.
failOpen = <bool>
* OPTIONAL
* Defaults to false if not set.
* If set to true, Splunk will bypass Duo multifactor authentication when
the service is
unavailable.
timeout = <int>
* OPTIONAL
* It determines the connection timeout in seconds for the outbound Duo
  HTTPS connection.
* If not set, Splunk will use its default HTTPS connection timeout
which is 12 seconds.
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for incoming
connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* If not set, Splunk uses the sslVersions provided in server.conf
sslVerifyServerCert = <bool>
* OPTIONAL
* Defaults to false if not set.
* If this is set to true, you should make sure that the server that is
  being connected to is a valid one (authenticated). Both the common
  name and the alternate name of the server are then checked for a
  match if they are specified in this configuration file. A
  certificate is considered verified if either is matched.
sslRootCAPath = <path>
* OPTIONAL
* Not set by default.
* The <path> must refer to full path of a PEM format file containing
one or more
root CA certificates concatenated together.
* This Root CA must match the CA in the certificate chain of the SSL
certificate
returned by duo server.
useClientSSLCompression = <bool>
* OPTIONAL
* If set to true on client side, compression is enabled between the
server and client
as long as the server also supports it.
* If not set, Splunk uses the client SSL compression setting provided in
  server.conf
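Tying the Duo settings together, a hedged sketch (all values are placeholders; Duo provides everything except appSecretKey, which you generate locally):

[authentication]
externalTwoFactorAuthVendor = Duo
externalTwoFactorAuthSettings = duo-mfa

[duo-mfa]
apiHostname = api-xyz.duosecurity.com
integrationKey = <20-character integration key from Duo>
secretKey = <40-character secret key from Duo>
appSecretKey = <random, locally generated key of at least 40 characters>
failOpen = false
timeout = 12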
[<rsa-externalTwoFactorAuthSettings-key>]
* <rsa-externalTwoFactorAuthSettings-key> must be the value listed in
the
externalTwoFactorAuthSettings attribute, specified above in the
[authentication]
stanza.
* This stanza contains RSA specific multifactor authentication settings
and will be
activated only when externalTwoFactorAuthVendor is RSA.
* All the below attributes can be obtained from RSA Authentication
Manager 8.2 SP1.
authManagerUrl = <string>
* REQUIRED
* URL of REST endpoint of RSA Authentication Manager
* Splunk will send authentication requests to this URL.
* URL should be https based. Splunk does not support communication over
  http.
accessKey = <string>
* REQUIRED
* Access key needed by Splunk to communicate with RSA Authentication
Manager.
clientId = <string>
* REQUIRED
* Agent name created on RSA Authentication Manager is clientId.
failOpen = <bool>
* OPTIONAL
* If true, allow login in case authentication server is unavailable.
* Default: false.
timeout = <int>
* OPTIONAL
* It determines the connection timeout in seconds for the outbound HTTPS
connection.
* Default: 5.
messageOnError = <string>
* OPTIONAL
* Message that will be shown to the user in case of login failure.
* You can specify admin contact information or a link to a diagnostic
  page.
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for incoming
connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* If not set, Splunk uses the 'sslVersions' specified in server.conf
* Default: tls1.2
sslVerifyServerCert = <bool>
* OPTIONAL
* If this is set to true, you should make sure that the server that is
  being connected to is a valid one (authenticated). Both the common
  name and the alternate name of the server are then checked for a
  match if they are specified in this configuration file. A
  certificate is considered verified if either is matched.
* Default: true.
sslRootCAPath = <path>
* REQUIRED
* Not set by default.
* The <path> must refer to full path of a PEM format file containing
one or more
root CA certificates concatenated together.
* This Root CA must match the CA in the certificate chain of the SSL
certificate
returned by RSA server.
sslVersionsForClient = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for outgoing HTTP
connections.
* If not set, Splunk uses the 'sslVersionsForClient' specified in
server.conf
* Default: tls1.2
replicateCertificates = <boolean>
* OPTIONAL
* If enabled, RSA certificate files will be replicated across search
head cluster setup.
* If disabled, RSA certificate files need to be replicated manually
across SHC or else
2FA verification will fail.
* This setting will have no effect if search head clustering is
disabled.
* Default: true
enableMfaAuthRest = <boolean>
* Determines whether or not splunkd requires RSA two-factor
authentication
against REST endpoints.
* When two-factor authentication is enabled for REST endpoints, either you
  must log in to the Splunk instance with a valid RSA passcode, or
  requests to those endpoints must include a valid token in the following
  format, for example: "curl -k -u <username>:<password>:<token> -X GET
  <resource>"
* If set to "true", splunkd requires RSA REST two-factor authentication.
* If set to "false", splunkd does not require REST two-factor
authentication.
* Optional.
* Default: false
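A comparable sketch for RSA (placeholder values; the URL and keys come from your RSA Authentication Manager deployment):

[authentication]
externalTwoFactorAuthVendor = RSA
externalTwoFactorAuthSettings = rsa-mfa

[rsa-mfa]
authManagerUrl = <https URL of the RSA Authentication Manager REST endpoint>
accessKey = <access key from RSA Authentication Manager>
clientId = <agent name created on RSA Authentication Manager>
failOpen = false
timeout = 5
# REQUIRED: root CA matching the certificate chain returned by the RSA
# server (path is illustrative).
sslRootCAPath = $SPLUNK_HOME/etc/auth/rsa_root_ca.pem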
authentication.conf.example
# Version 7.2.1
#
# This is an example authentication.conf. authentication.conf is used to
# configure LDAP, Scripted, SAML and Proxy SSO authentication in addition
# to Splunk's native authentication.
#
# To use one of these configurations, copy the configuration block into
# authentication.conf in $SPLUNK_HOME/etc/system/local/. You must reload
# auth in manager or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[ldaphost]
host = ldaphost.domain.com
port = 389
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = password
userBaseDN = ou=People,dc=splunk,dc=com
userBaseFilter = (objectclass=splunkusers)
groupBaseDN = ou=Groups,dc=splunk,dc=com
groupBaseFilter = (objectclass=splunkgroups)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
timelimit = 10
network_timeout = 15
#### Example using the same server as 'ldaphost', but treating each user
as
#### their own group
[authentication]
authType = LDAP
authSettings = ldaphost_usergroups
[ldaphost_usergroups]
host = ldaphost.domain.com
port = 389
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = password
userBaseDN = ou=People,dc=splunk,dc=com
userBaseFilter = (objectclass=splunkusers)
groupBaseDN = ou=People,dc=splunk,dc=com
groupBaseFilter = (objectclass=splunkusers)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = uid
groupMemberAttribute = uid
groupNameAttribute = uid
timelimit = 10
network_timeout = 15
[roleMap_ldaphost_usergroups]
admin = admin_user1;admin_user2;admin_user3;admin_user4
power = power_user1;power_user2
user = user1;user2;user3
[AD]
SSLEnabled = 1
bindDN = [email protected]
bindDNpassword = ldap_bind_user_password
groupBaseDN = CN=Groups,DC=splunksupport,DC=kom
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = ADbogus.splunksupport.kom
port = 636
realNameAttribute = cn
userBaseDN = CN=Users,DC=splunksupport,DC=kom
userBaseFilter =
userNameAttribute = sAMAccountName
timelimit = 15
network_timeout = 20
anonymous_referrals = 0
[roleMap_AD]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
[authentication]
authSettings = SunLDAP
authType = LDAP
[SunLDAP]
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = Directory_Manager_Password
groupBaseDN = ou=Groups,dc=splunksupport,dc=com
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
host = ldapbogus.splunksupport.com
port = 389
realNameAttribute = givenName
userBaseDN = ou=People,dc=splunksupport,dc=com
userBaseFilter =
userNameAttribute = uid
timelimit = 5
network_timeout = 8
[roleMap_SunLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
[OpenLDAP]
bindDN = uid=directory_bind,cn=users,dc=osx,dc=company,dc=com
bindDNpassword = directory_bind_account_password
groupBaseFilter =
groupNameAttribute = cn
SSLEnabled = 0
port = 389
userBaseDN = cn=users,dc=osx,dc=company,dc=com
host = hostname_OR_IP
userBaseFilter =
userNameAttribute = uid
groupMappingAttribute = uid
groupBaseDN = dc=osx,dc=company,dc=com
groupMemberAttribute = memberUid
realNameAttribute = cn
timelimit = 5
network_timeout = 8
dynamicGroupFilter = (objectclass=groupOfURLs)
dynamicMemberAttribute = memberURL
nestedGroups = 1
[roleMap_OpenLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
[script]
scriptPath = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/share/splunk/authScriptSamples/radiusScripted.py"
[script]
scriptPath = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
[authentication]
authSettings = samlv2
authType = SAML
[samlv2]
attributeQuerySoapPassword = changeme
attributeQuerySoapUsername = test
entityId = test-splunk
idpAttributeQueryUrl = https://exsso/idp/attrsvc.ssaml2
idpCertPath = /home/splunk/etc/auth/idp.crt
idpSSOUrl = https://exsso/idp/SSO.saml2
idpSLOUrl = https://exsso/idp/SLO.saml2
signAuthnRequest = true
signedAssertion = true
attributeQueryRequestSigned = true
attributeQueryResponseSigned = true
redirectPort = 9332
cipherSuite = TLSv1 MEDIUM:@STRENGTH
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
[roleMap_SAML]
admin = SplunkAdmins
power = SplunkPowerUsers
user = all
[userToRoleMap_SAML]
samluser = user::Saml Real Name::[email protected]
[authenticationResponseAttrMap_SAML]
role = "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"
mail =
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
realName = "http://schemas.microsoft.com/identity/claims/displayname"
[authentication]
authSettings = my_proxy
authType = ProxySSO
[my_proxy]
blacklistedUsers = user1,user2
blacklistedAutoMappedRoles = admin
defaultRoleIfMissing = user
[roleMap_proxySSO]
admin = group1;group2
user = group1;group3
[userToRoleMap_proxySSO]
proxy_user1 = user
proxy_user2 = power;can_delete
[splunk_auth]
minPasswordLength = 8
minPasswordUppercase = 1
minPasswordLowercase = 1
minPasswordSpecial = 1
minPasswordDigit = 0
expirePasswordDays = 90
expireAlertDays = 15
expireUserAccounts = true
forceWeakPasswordChange = false
lockoutUsers = true
lockoutAttempts = 5
lockoutThresholdMins = 5
lockoutMins = 30
enablePasswordHistory = false
passwordHistoryCount = 24
authorize.conf
The following are the spec and example files for authorize.conf.
authorize.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for creating roles in
# authorize.conf. You can configure roles and granular access controls by
# creating your own authorize.conf.
#
# There is an authorize.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an authorize.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see authorize.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[default]
srchFilterSelecting = <boolean>
* Determines whether a role's search filters will be used for selecting
or
eliminating during role inheritance.
* Selecting will join the search filters with an OR when combining the
filters.
* Eliminating will join the search filters with an AND when combining
the
filters.
* All roles will default to true (in other words, selecting).
* Example:
* role1 srchFilter = sourcetype!=ex1 with selecting=true
* role2 srchFilter = sourcetype=ex2 with selecting = false
* role3 srchFilter = sourcetype!=ex3 AND index=main with selecting =
true
* role3 inherits from role2 and role 2 inherits from role1
* Resulting srchFilter = ((sourcetype!=ex1) OR
(sourcetype!=ex3 AND index=main)) AND ((sourcetype=ex2))
[capability::<capability>]
* For the default list of capabilities and assignments, see authorize.conf
  under the 'default' directory.
* Only alphanumeric characters and "_" (underscore) are allowed in
capability names.
Examples:
* edit_visualizations
* view_license1
* Descriptions of specific capabilities are listed below.
[role_<roleName>]
<capability> = <enabled>
* A capability that is enabled for this role.
* You can list many of these.
* Note that 'enabled' is the only accepted value here, as capabilities
are
disabled by default.
* Roles inherit all capabilities from imported roles, and inherited
capabilities cannot be disabled.
* Role names cannot have uppercase characters. User names, however, are
case-insensitive.
importRoles = <string>
* Semicolon delimited list of other roles and their associated
capabilities
that should be imported.
* Importing other roles also imports the other aspects of that role,
such as
allowed indexes to search.
* By default a role imports no other roles.
grantableRoles = <string>
* Semicolon delimited list of roles that can be granted when edit_user
capability is present.
* By default, a role with 'edit_user' capability can create/edit a user
and
assign any role to them. Roles assigned to users can be restricted by
assigning
'edit_grantable_role' capability and specifying the roles in
'grantableRoles'.
When you set `grantableRoles`, the roles that can be assigned will be
restricted to the ones whose capabilities are a proper subset of those
in the
roles provided.
* For a role that has no edit_user capability, grantableRoles has no
effect.
* NOTE: A role that has been assigned 'grantableRoles' can list only
the users
whose capabilities are a subset of all capabilities of the roles
assigned to
'grantableRoles'.
* Example:
    Consider a Splunk instance where role1-4 are assigned the following
    capabilities, and user1-4 hold roles role1-4 respectively:
    role1: c1, c2, c3
    role2: c4, c5, c6
    role3: c1, c6
    role4: c4, c8
    [role_admin]
    grantableRoles = role1;role2
    For the above configuration, the admin user can list/edit only user1,
    user2, and user3, and can only assign roles role1, role2, and role3 to
    those users.
* Defaults to not present.
srchFilter = <string>
* Semicolon delimited list of search filters for this Role.
* By default we perform no search filtering.
* To override any search filters from imported roles, set this to '*',
as
the 'admin' role does.
srchTimeWin = <number>
* Maximum time span of a search, in seconds.
* This time window limit is applied backwards from the latest time
specified in a search.
* By default, searches are not limited to any specific time window.
* To override any search time windows from imported roles, set this to
'0'
(infinite), as the 'admin' role does.
* -1 is a special value that implies no search window has been set for
this role
* This is equivalent to not setting srchTimeWin at all, which means
it
can be easily overridden by an imported role
srchDiskQuota = <number>
* Maximum amount of disk space (MB) that can be used by search jobs of a
  user that belongs to this role.
* In search head clustering environments, this setting takes effect on
a
per-member basis. There is no cluster-wide accounting.
* The dispatch manager checks the quota at the dispatch time of a search
and additionally the search process will check at intervals that are
defined
in the 'disk_usage_update_period' setting in limits.conf as long as
the
search is active.
* The quota can be exceeded at times, since the search process does not
check
the quota constantly.
* Exceeding this quota causes the search to be auto-finalized
immediately,
even if there are results that have not yet been returned.
* Defaults to '100', for 100 MB.
srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of
this role can have.
* This excludes real-time searches, see rtSrchJobsQuota.
* Defaults to 3.
rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of
this
role can have.
* Defaults to 6.
srchMaxTime = <number><unit>
* Maximum amount of time that searches of users from this role will be
  allowed to run.
* Once the search has run for this amount of time, it will be
  auto-finalized. If the role inherits from other roles, the maximum
  srchMaxTime value specified in the included roles is used.
* This maximum does not apply to real-time searches.
* Examples: 1h, 10m, 2hours, 2h, 2hrs, 100s
* Defaults to 100days
srchIndexesDefault = <string>
* A semicolon-delimited list of indexes to search when no index is
specified.
* These indexes can be wild-carded ("*"), with the exception that '*'
does not
match internal indexes.
* To match internal indexes, start with '_'. All internal indexes are
represented by '_*'.
* The wildcard character '*' is limited to match either all the
  non-internal indexes or all the internal indexes, but not both at once.
* If you make any changes in the "Indexes searched by default" Settings
panel
for a role in Splunk Web, those values take precedence, and any
wildcards
you specify in this setting are lost.
* Defaults to none.
srchIndexesAllowed = <string>
* Semicolon delimited list of indexes this role is allowed to search
* Follows the same wildcarding semantics as srchIndexesDefault
* If you make any changes in the "Indexes" Settings panel
for a role in Splunk Web, those values take precedence, and any
wildcards
you specify in this setting are lost.
* Defaults to none.
deleteIndexesAllowed = <string>
* Semicolon delimited list of indexes this role is allowed to delete
* This setting must be used in conjunction with the delete_by_keyword
capability
* Follows the same wildcarding semantics as srchIndexesDefault
* Defaults to none
cumulativeSrchJobsQuota = <number>
* Maximum number of concurrently running historical searches in total
across all members of this role
* Requires enable_cumulative_quota = true in limits.conf to take
effect.
* If a user belongs to multiple roles, the user's searches count
against
the role with the largest cumulative search quota. Once the quota for
that role is consumed, the user's searches count against the role with
the next largest quota, and so on.
* In search head clustering environments, this setting takes effect on
a
per-member basis. There is no cluster-wide accounting.
cumulativeRTSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches in total
across all members of this role
* Requires enable_cumulative_quota = true in limits.conf to take
effect.
* If a user belongs to multiple roles, the user's searches count
against
the role with the largest cumulative search quota. Once the quota for
that role is consumed, the user's searches count against the role with
the next largest quota, and so on.
* In search head clustering environments, this setting takes effect
on a per-member basis. There is no cluster-wide accounting.
Descriptions of Splunk system capabilities
* Capabilities are added to roles, to which users are then assigned. When
  a user is assigned a role, they acquire the capabilities added to that
  role.
[capability::accelerate_datamodel]
[capability::accelerate_search]
[capability::run_multi_phased_searches]
[capability::admin_all_objects]
[capability::change_authentication]
[capability::change_own_password]
* Lets a user change their own password. You can remove this capability
to control the password for a user.
[capability::delete_by_keyword]
* Lets a user use the "delete" search operator. Note that this does not
actually delete the raw data on disk, instead it masks the data
(via the index) from showing up in search results.
[capability::dispatch_rest_to_indexers]
[capability::edit_deployment_client]
[capability::edit_deployment_server]
[capability::edit_dist_peer]
[capability::edit_encryption_key_provider]
[capability::request_pstacks]
* Lets a user request pstacks from splunkd using a REST endpoint.
[capability::edit_forwarders]
[capability::edit_health]
[capability::edit_httpauths]
* Lets a user edit and end user sessions through the httpauth-tokens
endpoint.
[capability::edit_indexer_cluster]
[capability::edit_indexerdiscovery]
[capability::edit_input_defaults]
* Lets a user change the default hostname for input data through the
server
settings endpoint.
[capability::edit_monitor]
* Lets a user add inputs and edit settings for monitoring files.
* Also used by the standard inputs endpoint as well as the one-shot input
  endpoint.
[capability::edit_modinput_winhostmon]
* Lets a user add and edit inputs for monitoring Windows host data.
[capability::edit_modinput_winnetmon]
* Lets a user add and edit inputs for monitoring Windows network data.
[capability::edit_modinput_winprintmon]
* Lets a user add and edit inputs for monitoring Windows printer data.
[capability::edit_modinput_perfmon]
* Lets a user add and edit inputs for monitoring Windows performance.
[capability::edit_modinput_admon]
* Lets a user add and edit inputs for monitoring Splunk's Active
Directory.
[capability::edit_roles]
[capability::edit_roles_grantable]
* Lets the user edit roles and change user-to-role mappings for a limited
  set of roles.
* To limit this ability, also assign the edit_roles_grantable capability
  and configure grantableRoles in authorize.conf. For example:
  grantableRoles = role1;role2;role3. This lets the user create roles
  using the subset of capabilities that the user has in their
  grantable_roles configuration.
[capability::edit_scripted]
[capability::edit_search_head_clustering]
[capability::edit_search_scheduler]
[capability::edit_search_schedule_priority]
[capability::edit_search_schedule_window]
[capability::edit_search_server]
[capability::edit_server]
* Lets the user edit general server and introspection settings, such
as the server name, log levels, etc.
* This capability also inherits the ability to read general server
and introspection settings.
[capability::edit_server_crl]
[capability::edit_sourcetypes]
[capability::edit_splunktcp]
* Lets a user change settings for receiving TCP input from another
Splunk
instance.
[capability::edit_splunktcp_ssl]
* Lets a user view and edit SSL-specific settings for Splunk TCP input.
[capability::edit_splunktcp_token]
[capability::edit_tcp]
[capability::edit_telemetry_settings]
[capability::edit_token_http]
* Lets a user create, edit, display, and remove settings for HTTP token
input.
* Enables the HTTP Event Collector feature.
[capability::edit_udp]
[capability::edit_user]
[capability::edit_view_html]
[capability::edit_web_settings]
* Lets a user change the settings for web.conf through the system
settings
endpoint.
[capability::export_results_is_visible]
[capability::get_diag]
[capability::get_metadata]
[capability::get_typeahead]
* Enables typeahead for a user, both the typeahead endpoint and the
'typeahead' search processor.
[capability::indexes_edit]
* Lets a user change any index settings such as file size and memory
limits.
[capability::input_file]
[capability::license_tab]
[capability::license_edit]
[capability::license_view_warnings]
[capability::list_deployment_client]
[capability::list_deployment_server]
[capability::list_forwarders]
[capability::list_health]
[capability::list_httpauths]
[capability::list_indexer_cluster]
* Lets a user list indexer cluster objects such as buckets, peers, etc.
[capability::list_indexerdiscovery]
[capability::list_inputs]
* Lets a user view the list of inputs, including files, TCP, UDP,
Scripts, etc.
[capability::list_introspection]
[capability::list_search_head_clustering]
[capability::list_search_scheduler]
[capability::list_settings]
[capability::list_metrics_catalog]
* Lets a user list metrics catalog information such as the metric names,
dimensions, and dimension values.
[capability::list_storage_passwords]
[capability::never_lockout]
[capability::never_expire]
[capability::output_file]
[capability::request_remote_tok]
[capability::rest_apps_management]
* Lets a user edit settings for entries and categories in the python
remote
apps handler.
* See restmap.conf for more information.
[capability::rest_apps_view]
[capability::rest_properties_get]
[capability::rest_properties_set]
[capability::restart_splunkd]
[capability::rtsearch]
[capability::run_collect]
[capability::run_mcollect]
[capability::run_debug_commands]
[capability::schedule_rtsearch]
[capability::schedule_search]
* Lets a user schedule saved searches, create and update alerts, and
review triggered alert information.
[capability::search]
[capability::search_process_config_refresh]
[capability::use_file_operator]
* Lets a user use the "file" search operator. The "file" search operator
is DEPRECATED.
[capability::web_debug]
[capability::edit_statsd_transforms]
* Lets a user edit statsd transforms through the corresponding REST
  endpoint.
[capability::edit_metric_schema]
* Lets a user define the schema of the log data which needs to be
converted
into metric format using services/data/metric-transforms/schema
endpoint.
[capability::list_workload_pools]
* Lets a user list and view workload pool and workload status
information through
the workloads endpoint.
[capability::edit_workload_pools]
* Lets a user create and edit workload pool and workload config
information
(except workload rule) through the workloads endpoint.
[capability::select_workload_pools]
[capability::list_workload_rules]
* Lets a user list and view workload rule information from the
workload/rules
endpoint.
[capability::edit_workload_rules]
authorize.conf.example
# Version 7.2.1
#
# This is an example authorize.conf. Use this file to configure roles and
# capabilities.
#
# To use one or more of these configurations, copy the configuration block
# into authorize.conf in $SPLUNK_HOME/etc/system/local/. You must reload
# auth or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[role_ninja]
rtsearch = enabled
importRoles = user
srchFilter = host=foo
srchIndexesAllowed = *
srchIndexesDefault = mail;main
srchJobsQuota = 8
rtSrchJobsQuota = 8
srchDiskQuota = 500
# This creates the role 'ninja', which inherits capabilities from the
# 'user' role. ninja has almost the same capabilities as power, except it
# cannot schedule searches.
#
# The search filter limits ninja to searching on host=foo.
#
# ninja is allowed to search all public indexes (those that do not start
# with underscore), and will search the indexes mail and main if no index
# is specified in the search.
#
# ninja is allowed to run 8 search jobs and 8 real time search jobs
# concurrently (these counts are independent).
#
# ninja is allowed to take up 500 megabytes total on disk for all their
# jobs.
checklist.conf
The following are the spec and example files for checklist.conf.
checklist.conf.spec
# Version 7.2.1
#
# This file contains the set of attributes and values you can use to
# configure checklist.conf to run health checks in Monitoring Console.
# Any health checks you add manually should be stored in your app's
# local directory.
#
[<uniq-check-item-name>]
doc_link = <ASCII string>
* (optional) Location string for help documentation for this health
  check.
* If omitted, no help link will be displayed to help the user fix this
  health check.
* Can be a comma separated list if more than one documentation link is
  needed.
disabled = [0|1]
* Disable this check item by setting to 1.
* Defaults to 0.
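Since checklist.conf ships with no example, here is a minimal hypothetical check item (the stanza and setting names follow this spec; the 'search' value is illustrative only and uses the $rest_scope$ token and 'instance' column described below):

[example_server_info_check]
doc_link = <location string for this check's help documentation>
disabled = 0
search = | rest $rest_scope$ /services/server/info \
  | rename splunk_server as instance \
  | eval severity_level = 0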
* The check search should produce an "instance" column, which should be
  the "host" field of events or the "splunk_server" field of "| rest"
  search.
* In order to generate this field, please do things like:
* ... | rename host as instance
* or
* ... | rename splunk_server as instance
*
* <metric number or string> (optional) one ore more columns to "show
your work"
* This should be the data that severity_level is determined from.
* The user should be able to look at this field to get some idea of
what made the instance fail this check.
*
* <level number> (required) could be one of the following:
* - -1 (N/A) means: "Not Applicable"
* - 0 (ok) means: "all good"
* - 1 (info) means: "just ignore it if you don't
understand"
* - 2 (warning) means: "well, you'd better take a look"
* - 3 (error) means: "FIRE!"
*
* Please also note that the search string must contain one of the
  following tokens to properly scope to either a single instance or a
  group of instances, depending on the settings of
  checklistsettings.conf.
* $rest_scope$ - used for "|rest" search
* $hist_scope$ - used for historical search
checklist.conf.example
No example
collections.conf
The following are the spec and example files for collections.conf.
collections.conf.spec
# Version 7.2.1
#
# This file configures the KV Store collections for a given app in
# Splunk.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<collection-name>]
enforceTypes = true|false
* Indicates whether to enforce data types when inserting data into the
  collection.
* When set to true, invalid insert operations fail.
* When set to false, invalid insert operations drop only the invalid field.
* Defaults to false.
field.<name> = number|bool|string|time
* Field type for a field called <name>.
* If the data type is not provided, it is inferred from the provided JSON
  data type.
accelerated_fields.<name> = <json>
* Acceleration definition for an acceleration called <name>.
* Must be a valid JSON document (invalid JSON is ignored).
* Example: 'acceleration.foo={"a":1, "b":-1}' is a compound acceleration
  that first sorts 'a' in ascending order and then 'b' in descending order.
* There are restrictions in compound acceleration. A compound acceleration
  must not have more than one field in an array. If it does, KV Store does
  not start or work correctly.
* If multiple accelerations with the same definition are in the same
  collection, the duplicates are skipped.
* If the data within a field is too large for acceleration, you will see a
  warning when you try to create an accelerated field and the acceleration
  will not be created.
* An acceleration is always created on the _key.
* The order of accelerations is important. For example, an acceleration of
  { "a":1, "b":1 } speeds queries on "a" and "a" + "b", but not on "b"
  alone.
* Multiple separate accelerations also speed up queries. For example,
  separate accelerations { "a": 1 } and { "b": 1 } will speed up queries on
  "a" + "b", but not as well as a combined acceleration { "a":1, "b":1 }.
* Defaults to nothing (no acceleration).
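To make the ordering rules concrete, here is a hypothetical collection
(stanza and field names are illustrative only) that combines a compound
acceleration with a separate single-field acceleration:

[hypothetical_collection]
field.a = number
field.b = number
# Speeds queries on "a" and on "a" + "b", but not on "b" alone.
accelerated_fields.ab = {"a": 1, "b": 1}
# Together with the acceleration above, also speeds queries on "b" alone.
accelerated_fields.b_only = {"b": 1}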
profilingEnabled = true|false
* Indicates whether to enable logging of slow-running operations, as defined
  in 'profilingThresholdMs'.
* Defaults to false.
replicate = true|false
* Indicates whether to replicate this collection on indexers. When false,
  this collection is not replicated, and lookups that depend on this
  collection will not be available (although if you run a lookup command
  with 'local=true', local lookups will still be available). When true,
  this collection is replicated on indexers.
* Defaults to false.
replication_dump_strategy = one_file|auto
* Indicates how to store dump files. When set to one_file, dump files are
  stored in a single file. When set to auto, dumps are stored in multiple
  files when the size of the collection exceeds the value of
  'replication_dump_maximum_file_size'.
* Defaults to auto.
replication_dump_maximum_file_size = <unsigned integer>
* Specifies the maximum file size (in KB) for each dump file when
  'replication_dump_strategy' is set to auto.
* KV Store does not pre-calculate the size of the records that will be
  written to disk, so the size of the resulting files can be affected by the
  'max_rows_in_memory_per_dump' setting from 'limits.conf'.
* Defaults to 10240KB.
type = internal_cache|undefined
* Indicates the type of data that this collection holds.
* When set to 'internal_cache', changing the configuration of the current
  instance between search head cluster, search head pool, or standalone
  will erase the data in the collection.
* Defaults to 'undefined'.
* For internal use only.
collections.conf.example
# Version 7.2.1
#
# The following is an example collections.conf configuration.
#
# To use one or more of these configurations, copy the configuration block
# into collections.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note this example uses a compound acceleration. Please check
# collections.conf.spec for restrictions on compound acceleration.
[mycollection]
field.foo = number
field.bar = string
accelerated_fields.myacceleration = {"foo": 1, "bar": -1}
commands.conf
The following are the spec and example files for commands.conf.
commands.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for creating search
# commands for any custom search scripts created. Add your custom search
# script to $SPLUNK_HOME/etc/searchscripts/ or
# $SPLUNK_HOME/etc/apps/MY_APP/bin/. For the latter, put a custom
# commands.conf in $SPLUNK_HOME/etc/apps/MY_APP. For the former, put your
# custom commands.conf in $SPLUNK_HOME/etc/system/local/.
GLOBAL SETTINGS
[<STANZA_NAME>]
* Set the following attributes/values for the command. Otherwise, Splunk
  uses the defaults.
* If the filename attribute is not specified, Splunk searches for an
  external program by appending extensions (e.g. ".py", ".pl") to the
  stanza name.
* If chunked = true, in addition to ".py" and ".pl" as above, Splunk
  searches using the extensions ".exe", ".bat", ".cmd", ".sh", ".js",
  and no extension (to find extensionless binaries).
* See the filename attribute for more information about how Splunk
  searches for external programs.
type = <string>
* Type of script: python, perl
* Defaults to python.
filename = <string>
* Optionally specify the program to be executed when the search command
  is used.
* Splunk looks for the given filename in the app's bin directory.
* The filename attribute cannot reference any file outside of the app's
  bin directory.
* If the filename ends in ".py", Splunk's python interpreter is used
  to invoke the external script.
* If chunked = true, Splunk looks for the given filename in
  $SPLUNK_HOME/etc/apps/MY_APP/<PLATFORM>/bin before searching
  $SPLUNK_HOME/etc/apps/MY_APP/bin, where <PLATFORM> is one of
  "linux_x86_64", "linux_x86", "windows_x86_64", "windows_x86", or
  "darwin_x86_64" (depending on the platform on which Splunk is running).
* If chunked = true and if a path pointer file (*.path) is specified,
  the contents of the file are read and the result is used as the
  command to be run. Environment variables in the path pointer
  file are substituted. Path pointer files can be used to reference
  system binaries (e.g. /usr/bin/python).
command.arg.<N> = <string>
* Additional command-line arguments to use when invoking this program.
  Environment variables will be substituted (e.g. $SPLUNK_HOME).
* Only available if chunked = true.
local = [true|false]
* If true, specifies that the command should be run on the search head only.
* Defaults to false.
perf_warn_limit = <integer>
* Issue a performance warning message if more than this many input events
  are passed to this external command (0 = never).
* Defaults to 0 (disabled).
streaming = [true|false]
* Specify whether the command is streamable.
* Defaults to false.
maxinputs = <integer>
* Maximum number of events that can be passed to the command for each
  invocation.
* This limit cannot exceed the value of maxresultrows in limits.conf.
* 0 for no limit.
* Defaults to 50000.
passauth = [true|false]
* If set to true, splunkd passes several authentication-related facts
  at the start of input, as part of the header (see enableheader).
* The following headers are sent:
  * authString: pseudo-xml string that resembles
    <auth><userId>username</userId><username>username</username><authToken>auth_token</authToken></auth>
    where the username is passed twice, and the authToken may be used
    to contact splunkd during the script run.
  * sessionKey: the session key again.
  * owner: the user portion of the search context
  * namespace: the app portion of the search context
* Requires enableheader = true; if enableheader = false, this flag will
  be treated as false as well.
* Defaults to false.
* If chunked = true, this attribute is ignored. An authentication
  token is always passed to commands using the chunked custom search
  command protocol.
run_in_preview = [true|false]
* Specify whether to run this command if generating results just for preview
  rather than final output.
* Defaults to true.
enableheader = [true|false]
* Indicate whether or not your script is expecting header information.
* Currently, the only thing in the header information is an auth token.
* If set to true it will expect as input a head section + '\n' then the
  csv input.
* NOTE: Should be set to true if you use splunk.Intersplunk.
* Defaults to true.
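For the legacy (non-chunked) protocol, the splunk.Intersplunk module
referenced above parses the header and CSV input for you. The following is a
minimal sketch of such a script; the file name and the added field are
illustrative only.

# example_cmd.py - sketch of a legacy (non-chunked) custom search command.
# Assumes enableheader = true (the default), so the script receives a
# header section + '\n' + CSV events on stdin.
import splunk.Intersplunk

try:
    # Consumes the header and CSV input; returns events as a list of dicts.
    results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()
    for result in results:
        # Tag each event with a marker field (illustrative only).
        result["processed"] = "1"
    # Writes the events back to stdout as CSV for the next search command.
    splunk.Intersplunk.outputResults(results)
except Exception as e:
    splunk.Intersplunk.generateErrorResults(str(e))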
retainsevents = [true|false]
* Specify whether the command retains events (the way the sort/dedup/cluster
  commands do) or whether it transforms them (the way the stats command
  does).
* Defaults to false.
generating = [true|false]
* Specify whether your command generates new events. If no events are passed
  to the command, will it generate events?
* Defaults to false.
generates_timeorder = [true|false]
* If generating = true, does command generate events in descending time
  order (latest first)?
* Defaults to false.
overrides_timeorder = [true|false]
* If generating = false and streaming=true, does command change the order of
  events with respect to time?
* Defaults to false.
requires_preop = [true|false]
* Specify whether the command sequence specified by the 'streaming_preop'
  key is required for proper execution or is it an optimization only.
* Default is false (streaming_preop not required).
streaming_preop = <string>
* A string that denotes the requested pre-streaming search string.
required_fields = <string>
* A comma separated list of fields that this command may use.
* Informs previous commands that they should retain/extract these fields if
  possible. No error is generated if a field specified is missing.
* Defaults to '*'
supports_multivalues = [true|false]
* Specify whether the command supports multivalues.
* If true, multivalues will be treated as python lists of strings, instead
  of a flat string (when using Intersplunk to interpret stdin/stdout).
* If the list only contains one element, the value of that element will be
  returned, rather than a list
  (for example, isinstance(val, basestring) == True).
supports_getinfo = [true|false]
* Specifies whether the command supports dynamic probing for settings
(first argument invoked == __GETINFO__ or __EXECUTE__).
supports_rawargs = [true|false]
* Specifies whether the command supports raw arguments being passed to it or
  if it prefers parsed arguments (where quotes are stripped).
* If unspecified, the default is false
undo_scheduler_escaping = [true|false]
* Specifies whether the command's raw arguments need to be unescaped.
* This particularly applies to commands being invoked by the scheduler.
* This applies only if the command supports raw arguments
  (supports_rawargs).
* If unspecified, the default is false.
requires_srinfo = [true|false]
* Specifies if the command requires information stored in SearchResultsInfo.
* If true, requires that enableheader be set to true, and the full
  pathname of the info file (a csv file) will be emitted in the header under
  the key 'infoPath'.
* If unspecified, the default is false.
needs_empty_results = [true|false]
* Specifies whether or not this search command needs to be called with
  intermediate empty search results.
* If unspecified, the default is true.
changes_colorder = [true|false]
* Specify whether the script output should be used to change the column
  ordering of the fields.
* Default is true.
outputheader = <true/false>
* If set to true, output of script should be
  a header section + blank line + csv output.
* If false, script output should be pure csv only.
* Default is false.
clear_required_fields = [true|false]
* If true, required_fields represents the *only* fields required.
* If false, required_fields are additive to any fields that may be required
  by subsequent commands.
* In most cases, false is appropriate for streaming commands and true for
  reporting commands.
* Default is false.
stderr_dest = [log|message|none]
* What to do with the stderr output from the script.
* 'log' means to write the output to the job's search.log.
* 'message' means to write each line as a search info message. The message
  level can be set by adding that level (in ALL CAPS) to the start of the
  line, e.g. "WARN my warning message."
* 'none' means to discard the stderr output.
* Defaults to log.
is_order_sensitive = [true|false]
* Specify whether the command requires ordered input.
* Defaults to false.
is_risky = [true|false]
* Searches using Splunk Web are flagged to warn users when they
unknowingly run a search that contains commands that might be a
security risk. This warning appears when users click a link or type
a URL that loads a search that contains risky commands. This warning
does not appear when users create ad hoc searches.
* This flag is used to determine whether the command is risky.
* Defaults to false.
* Specific commands that ship with the product have their own defaults.
chunked = [true|false]
* If true, this command supports the new "chunked" custom search
  command protocol.
* If true, the only other commands.conf attributes supported are
  is_risky, maxwait, maxchunksize, filename, and command.arg.<N>.
* If false, this command uses the legacy custom search command
  protocol supported by Intersplunk.py.
* Default is false.
maxwait = <integer>
* Only available if chunked = true.
* Not supported in Windows.
* The value of maxwait is the maximum number of seconds the custom
  search command can pause before producing output.
* If set to 0, the command can pause forever.
* Default is 0.
maxchunksize = <integer>
* Only available if chunked = true.
* The value of maxchunksize is the maximum size chunk (size of metadata
  plus size of body) the external command may produce. If the command
  tries to produce a larger chunk, the command is terminated.
* If set to 0, the command may send any size chunk.
* Default is 0.
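As a sketch, a chunked command that may pause for up to a minute might be
registered as follows (the stanza and file names are hypothetical):

[slowcmd]
chunked = true
filename = slowcmd.py
# Allow up to 60 seconds of silence before output (not supported on Windows).
maxwait = 60
# Accept chunks of any size from the command.
maxchunksize = 0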
commands.conf.example
# Version 7.2.1
#
# This is an example commands.conf. Use this file to configure settings
# for external search commands.
#
# To use one or more of these configurations, copy the configuration block
# into commands.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence)
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
# customizations.
##############
# defaults for all external commands, exceptions are below in
# individual stanzas
# is command streamable?
streaming = false
# end defaults
#####################
[crawl]
filename = crawl.py
[createrss]
filename = createrss.py
[diff]
filename = diff.py
[gentimes]
filename = gentimes.py
[head]
filename = head.py
[loglady]
filename = loglady.py
[marklar]
filename = marklar.py
[runshellscript]
filename = runshellscript.py
[sendemail]
filename = sendemail.py
[translate]
filename = translate.py
[transpose]
filename = transpose.py
[uniq]
filename = uniq.py
[windbag]
filename = windbag.py
supports_multivalues = true
[xmlkv]
filename = xmlkv.py
[xmlunescape]
filename = xmlunescape.py
datamodels.conf
The following are the spec and example files for datamodels.conf.
datamodels.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring
# data models. To configure a datamodel for an app, put your custom
# datamodels.conf in $SPLUNK_HOME/etc/apps/MY_APP/local/
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<datamodel_name>]
* Each stanza represents a data model. The data model name is the stanza
  name.
acceleration = <bool>
* Set acceleration to true to enable automatic acceleration of this data
  model.
* Automatic acceleration creates auxiliary column stores for the fields
  and values in the events for this datamodel on a per-bucket basis.
* These column stores take additional space on disk, so be sure you have the
  proper amount of disk space. Additional space required depends on the
  number of events, fields, and distinct field values in the data.
* The Splunk software creates and maintains these column stores on a
  schedule you can specify with 'acceleration.cron_schedule.' You can query
  them with the 'tstats' command.
acceleration.earliest_time = <relative-time-str>
* Specifies how far back in time the Splunk software should keep these
  column stores (and create if acceleration.backfill_time is not set).
* Specified by a relative time string. For example, '-7d' means 'accelerate
  data within the last 7 days.'
* Defaults to an empty string, meaning 'keep these stores for all time.'
acceleration.backfill_time = <relative-time-str>
* ADVANCED: Specifies how far back in time the Splunk software should create
  its column stores.
* ONLY set this parameter if you want to backfill less data than the
  retention period set by 'acceleration.earliest_time'. You may want to use
  this parameter to limit your time window for column store creation in a
  large environment where initial creation of a large set of column stores
  is an expensive operation.
* WARNING: Do not set 'acceleration.backfill_time' to a narrow time window.
  If one of your indexers is down for a period longer than this backfill
  time, you may miss accelerating a window of your incoming data.
* MUST be set to a more recent time than 'acceleration.earliest_time'. For
  example, if you set 'acceleration.earliest_time' to '-1y' to retain your
  column stores for a one year window, you could set
  'acceleration.backfill_time' to '-20d' to create column stores that only
  cover the last 20 days. However, you cannot set
  'acceleration.backfill_time' to '-2y', because that goes farther back in
  time than the 'acceleration.earliest_time' setting of '-1y'.
* Defaults to empty string (unset). When 'acceleration.backfill_time' is
  unset, the Splunk software always backfills fully to
  'acceleration.earliest_time.'
acceleration.poll_buckets_until_maxtime = <bool>
* In a distributed environment that consists of heterogeneous machines,
  summarizations might complete sooner on machines with less data and
  faster resources. After the summarization search is finished with all of
  the buckets, the search ends. However, the overall search runtime is
  determined by the slowest machine in the environment.
* When set to "true": All of the machines run for "max_time"
  (approximately). The buckets are polled repeatedly for new data to
  summarize.
* Set this to true if your data model is sensitive to summarization
  latency delays.
* When this setting is enabled, the summarization search is counted against
  the number of concurrent searches you can run until "max_time" is
  reached.
* Default: false
acceleration.cron_schedule = <cron-string>
* Cron schedule to be used to probe/generate the column stores for this
  data model.
* Defaults to: */5 * * * *
acceleration.manual_rebuilds = <bool>
* ADVANCED: When set to 'true,' this setting prevents outdated summaries
  from being rebuilt by the 'summarize' command.
* Normally, during the creation phase, the 'summarize' command automatically
  rebuilds summaries that are considered to be out-of-date, such as when the
  configuration backing the data model changes.
* The Splunk software considers a summary to be outdated when:
  * The data model search stored in its metadata no longer matches its
    current data model search.
  * The search stored in its metadata cannot be parsed.
* NOTE: If the Splunk software finds a partial summary to be outdated, it
  always rebuilds that summary so that a bucket summary only has results
  corresponding to one datamodel search.
* Defaults to: false
acceleration.allow_skew = <percentage>|<duration-specifier>
* Allows the search scheduler to randomly distribute scheduled searches more
  evenly over their periods.
* When set to non-zero for searches with the following cron_schedule values,
  the search scheduler randomly "skews" the second, minute, and hour that
  the search actually runs on:
    * * * * *        Every minute.
    */M * * * *      Every M minutes (M > 0).
    0 * * * *        Every hour.
    0 */H * * *      Every H hours (H > 0).
    0 0 * * *        Every day (at midnight).
* When set to non-zero for a search that has any other cron_schedule
  setting, the search scheduler can only randomly "skew" the second that the
  search runs on.
* The amount of skew for a specific search remains constant between edits of
  the search.
* An integer value followed by '%' (percent) specifies the maximum amount of
  time to skew as a percentage of the scheduled search period.
* Otherwise, use <int><unit> to specify a maximum duration. Relevant units
  are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours, d, day, days.
  (The <unit> may be omitted only when <int> is 0.)
* Examples:
    100% (for an every-5-minute search) = 5 minutes maximum
    50% (for an every-minute search) = 30 seconds maximum
    5m = 5 minutes maximum
    1h = 1 hour maximum
* A value of 0 disallows skew.
* Default is 0.
acceleration.schedule_priority = default | higher | highest
* Raises the scheduling priority of the summarization search. Requires
  the 'edit_search_schedule_priority' capability.
* Defaults to: default
* WARNING: Having too many searches with a non-default priority will impede
  the ability of the scheduler to minimize search starvation. Use this
  setting only for mission-critical searches.
acceleration.hunk.compression_codec = <string>
* Applicable only to Hunk data models. Specifies the compression codec to
  be used for the accelerated orc/parquet files.
acceleration.hunk.file_format = <string>
* Applicable only to Hunk data models. Valid options are "orc" and
  "parquet".
dataset.description = <string>
* User-entered description of the dataset entity.
dataset.type = [datamodel|table]
* The type of dataset:
+ "datamodel": An individual data model dataset.
+ "table": A special root data model dataset with a search where the
dataset is
defined by the dataset.commands attribute.
* Default: datamodel
dataset.display.diversity = [latest|random|diverse|rare]
* The user-selected diversity for previewing events contained by the
  dataset:
  + "latest": search a subset of the latest events
  + "random": search a random sampling of events
  + "diverse": search a diverse sampling of events
  + "rare": search a rare sampling of events based on clustering
* Default: latest
dataset.display.sample_ratio = <int>
* The integer value used to calculate the sample ratio for the dataset
  diversity. The formula is 1 / <int>.
* The sample ratio specifies the likelihood of any event being included in
  the sample.
* For example, if sample_ratio = 500 each event has a 1/500 chance of being
  included in the sample result set.
* Default: 1
dataset.display.limiting = <int>
* The limit of events to search over when previewing the dataset.
* Default: 100000
dataset.display.currentCommand = <int>
* The currently selected command the user is on while editing the dataset.
dataset.display.mode = [table|datasummary]
* The type of preview to use when editing the dataset:
+ "table": show individual events/results as rows.
+ "datasummary": show field values as columns.
* Default: table
dataset.display.datasummary.earliestTime = <time-str>
* The earliest time used for the search that powers the datasummary view of
  the dataset.
dataset.display.datasummary.latestTime = <time-str>
* The latest time used for the search that powers the datasummary view of
  the dataset.
tags_whitelist = <list-of-tags>
* A comma-separated list of tag fields that the data model requires
  for its search result sets.
* This is a search performance setting. Apply it only to data models
  that use a significant number of tag field attributes in their
  definitions. Data models without tag fields cannot use this setting.
  This setting does not recognize tags used in constraint searches.
* Only the tag fields identified by tags_whitelist (and the event types
  tagged by them) are loaded when searches are performed with this
  data model.
* When you update tags_whitelist for an accelerated data model,
  the Splunk software rebuilds the data model unless you have
  enabled acceleration.manual_rebuilds for it.
* If tags_whitelist is empty, the Splunk software attempts to optimize
  out unnecessary tag fields when searches are performed with this
  data model.
* Defaults to empty.
datamodels.conf.example
# Version 7.2.1
#
# Configuration for example datamodels
#
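# The shipped example file contains no stanzas. A hypothetical accelerated
# data model (stanza name and values are illustrative only) might look like:
[example_model]
acceleration = true
acceleration.earliest_time = -7d
acceleration.cron_schedule = */10 * * * *
acceleration.allow_skew = 50%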
datatypesbnf.conf
The following are the spec and example files for datatypesbnf.conf.
datatypesbnf.conf.spec
# Version 7.2.1
#
# This file affects how the search assistant (typeahead) shows the syntax
# for search commands.
[<syntax-type>]
syntax = <string>
* The syntax for your syntax type.
* Should correspond to a regular expression describing the term.
* Can also be a <field> or other similar value.
datatypesbnf.conf.example
No example
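Although no example file ships for datatypesbnf.conf, a minimal hypothetical
entry (the stanza name and pattern are illustrative only) might look like:

[int]
syntax = \d+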
default.meta.conf
The following are the spec and example files for default.meta.conf.
default.meta.conf.spec
# Version 7.2.1
#
#
# *.meta files contain ownership information, access controls, and export
# settings for Splunk objects like saved searches, event types, and views.
# Each app has its own default.meta file.
* read access to the app, to locate the object
* read access to the generic category within the app (eg. [savedsearches])
* If object does not permit write access to the user, the object will not
  be modifiable.
* If any layer does not permit read access to the user, the object will not
  be accessible in order to modify.
[views]
[views/index_status]
export = system
* To make this view available only in this app, set 'export = none' instead.
owner = admin
* Set admin as the owner of this view.
default.meta.conf.example
# Version 7.2.1
#
# This file contains example patterns for the metadata files default.meta
# and local.meta
#
default-mode.conf
The following are the spec and example files for default-mode.conf.
default-mode.conf.spec
# Version 7.2.1
#
# This file documents the syntax of default-mode.conf for comprehension and
# troubleshooting purposes.
# CAVEATS:
#
# default-mode.conf *will* be removed in a future version of Splunk, along
# with the entire configuration scheme that it affects. Any settings present
# in default-mode.conf files will be completely ignored at this point.
#
# Any number of seemingly reasonable configurations in default-mode.conf
# might fail to work, behave bizarrely, corrupt your data, iron your cat,
# cause unexpected rashes, or order unwanted food delivery to your house.
# Changes here alter the way that pieces of code will communicate which are
# only intended to be used in a specific configuration.
# INFORMATION:
# The main value of this spec file is to assist in reading these files for
# troubleshooting purposes. default-mode.conf was originally intended to
# provide a way to describe the alternate setups used by the Splunk Light
# Forwarder and Splunk Universal Forwarder.
# SYNTAX:
[pipeline:<string>]
[pipeline:<string>]
* Refers to a particular splunkd pipeline.
* The set of named pipelines is a splunk-internal design. That does not
  mean that the Splunk design is a secret, but it means it is not external
  for the purposes of configuration.
* Useful information on the data processing system of splunk can be found
  in the external documentation, for example
  http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline
default-mode.conf.example
No example
deployment.conf
The following are the spec and example files for deployment.conf.
deployment.conf.spec
# Version 7.2.1
#
# *** REMOVED; NO LONGER USED ***
#
#
# This configuration file has been replaced by:
# 1.) deploymentclient.conf - for configuring Deployment Clients.
# 2.) serverclass.conf - for Deployment Server server class
configuration.
#
#
# Compatibility:
# Splunk 4.x Deployment Server is NOT compatible with Splunk 3.x
Deployment Clients.
#
deployment.conf.example
No example
deploymentclient.conf
The following are the spec and example files for deploymentclient.conf.
deploymentclient.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values for configuring a
# deployment client to receive content (apps and configurations) from a
# deployment server.
#
# To customize the way a deployment client behaves, place a
# deploymentclient.conf in $SPLUNK_HOME/etc/system/local/ on that Splunk
# instance. Configure what apps or configuration content is deployed to a
# given deployment client in serverclass.conf. Refer to
# serverclass.conf.spec and serverclass.conf.example for more information.
#
# You must restart Splunk for changes to this configuration file to take
# effect.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure a Splunk deployment client.
#
# Note: At a minimum the [deployment-client] stanza is required in
# deploymentclient.conf for deployment client to be enabled.
#***************************************************************************
GLOBAL SETTINGS
[deployment-client]
disabled = [false|true]
* Defaults to false
* Enable/Disable deployment client.
clientName = deploymentClient
* Defaults to deploymentClient.
* A name that the deployment server can filter on.
* Takes precedence over DNS names.
workingDir = $SPLUNK_HOME/var/run
* Temporary folder used by the deploymentClient to download apps and
  configuration content.
repositoryLocation = $SPLUNK_HOME/etc/apps
* The location into which content is installed after being downloaded from
  a deployment server.
* Apps and configuration content must be installed into the default
  location ($SPLUNK_HOME/etc/apps) or it will not be recognized by the
  Splunk instance on the deployment client.
* Note: Apps and configuration content to be deployed may be located in
  an alternate location on the deployment server. Set both
  repositoryLocation and serverRepositoryLocationPolicy explicitly to
  ensure that the content is installed into the correct location
  ($SPLUNK_HOME/etc/apps) on the deployment client.
* The deployment client uses the 'serverRepositoryLocationPolicy'
  defined below to determine which value of repositoryLocation to use.
serverRepositoryLocationPolicy = [acceptSplunkHome|acceptAlways|rejectAlways]
* Defaults to acceptSplunkHome.
* acceptSplunkHome - accept the repositoryLocation supplied by the
  deployment server, only if it is rooted by $SPLUNK_HOME.
* acceptAlways - always accept the repositoryLocation supplied by the
  deployment server.
* rejectAlways - reject the server supplied value and use the
  repositoryLocation specified in the local deploymentclient.conf.
endpoint = $deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$
* The HTTP endpoint from which content should be downloaded.
* Note: The deployment server may specify a different endpoint from which
  to download each set of content (individual apps, etc).
* The deployment client will use the serverEndpointPolicy defined below to
  determine which value to use.
* $deploymentServerUri$ will resolve to targetUri defined in the
  [target-broker] stanza below.
* $serverClassName$ and $appName$ mean what they say.
serverEndpointPolicy = [acceptAlways|rejectAlways]
* defaults to acceptAlways
* acceptAlways - always accept the endpoint supplied by the server.
* rejectAlways - reject the endpoint supplied by the server. Always use the
  'endpoint' definition above.
handshakeReplySubscriptionRetry = <integer>
* Defaults to 10.
* If splunk is unable to complete the handshake, it will retry subscribing
  to the handshake channel after this many handshake attempts.
appEventsResyncIntervalInSecs = <number in seconds>
* Defaults to 10*phoneHomeIntervalInSecs
* Fractional seconds are allowed.
* This sets the interval at which the client reports back its app state
  to the server.
# Advanced!
# You should use this property only when you have a hierarchical deployment
# server installation, and have a Splunk instance that behaves as both a
# DeploymentClient and a DeploymentServer.
reloadDSOnAppInstall = [false|true]
* Defaults to false.
* Setting this flag to true will cause the deploymentServer on this Splunk
  instance to be reloaded whenever an app is installed by this
  deploymentClient.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to connect to the specified
  Deployment Server.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
  selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
  does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless of this
  configuration.
* Defaults to sslVersions value in server.conf [sslConfig] stanza.
sslVerifyServerCert = <bool>
* If this is set to true, Splunk verifies that the Deployment Server
  (specified in 'targetUri') being connected to is a valid one
  (authenticated). Both the common name and the alternate name of the
  server are then checked for a match if they are specified in
  'sslCommonNameToCheck' and 'sslAltNameToCheck'. A certificate is
  considered verified if either is matched.
* Defaults to sslVerifyServerCert value in server.conf [sslConfig] stanza.
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format file.
* The <path> must refer to a PEM format file containing one or more root CA
  certificates concatenated together.
* Used for validating SSL certificate from Deployment Server.
* Defaults to caCertFile value in server.conf [sslConfig] stanza.
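# As a sketch of how the SSL settings above combine (the values shown are
# illustrative only, not shipped defaults):
#
# [deployment-client]
# sslVersions = tls1.2
# sslVerifyServerCert = true
# caCertFile = $SPLUNK_HOME/etc/auth/cacert.pem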
recv_timeout = <positive integer>
* The amount of time, in seconds, that a deployment client can take to
  receive or read data from a deployment server before the server
  connection times out.
* Defaults to 60.
[target-broker:deploymentServer]
targetUri= <uri>
* An example of <uri>: <scheme>://<deploymentServer>:<mgmtPort>
* URI of the deployment server.
deploymentclient.conf.example
# Version 7.2.1
#
# Example 1
# Deployment client receives apps and places them into the same
# repositoryLocation (locally, relative to $SPLUNK_HOME) as it picked them
# up from. This is typically $SPLUNK_HOME/etc/apps. There
# is nothing in [deployment-client] because the deployment client is not
# overriding the value set on the deployment server side.
[deployment-client]
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 2
# Deployment server keeps apps to be deployed in a non-standard location on
# the server side (perhaps for organization purposes).
# Deployment client receives apps and places them in the standard location.
# Note: Apps deployed to any location other than
# $SPLUNK_HOME/etc/apps on the deployment client side will
# not be recognized and run.
# This configuration rejects any location specified by the deployment
# server and replaces it with the standard client-side location.
[deployment-client]
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/apps
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 3
# Deployment client should get apps from an HTTP server that is different
# from the one specified by the deployment server.
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint = http://apache.mycompany.server:8080/$serverClassName$/$appName$.tar
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 4
# Deployment client should get apps from a location on the file system and
# not from a location specified by the deployment server.
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint = file:/<some_mount_point>/$serverClassName$/$appName$.tar
handshakeRetryIntervalInSecs=20
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 5
# Deployment client should phone home to the server for app updates more
# quickly. Deployment client should only send back appEvents once a day.
[deployment-client]
phoneHomeIntervalInSecs=30
appEventsResyncIntervalInSecs=86400
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 6
# Sets the deployment client connection/transaction timeouts to 1 minute.
# Deployment clients terminate connections if deployment server does not
# reply.
[deployment-client]
connect_timeout=60
send_timeout=60
recv_timeout=60
distsearch.conf
The following are the spec and example files for distsearch.conf.
distsearch.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values you can use to
# configure distributed search.
#
# To set custom configurations, place a distsearch.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see distsearch.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# These attributes are all configured on the search head, with the
# exception of the optional attributes listed under the SEARCH HEAD BUNDLE
# MOUNTING OPTIONS heading, which are configured on the search peers.
GLOBAL SETTINGS
[distributedSearch]
* Set distributed search configuration options under this stanza name.
* Follow this stanza name with any number of the following attribute/value
  pairs.
* If you do not set any attribute, Splunk uses the default value (if there
  is one listed).
disabled = [true|false]
* Toggle distributed search off (true) and on (false).
* Defaults to false (your distributed search stanza is enabled by default).
heartbeatPort = <port>
* This setting is deprecated
ttl = <integer>
* This setting is deprecated
statusTimeout = <int>
* Set connection timeout when gathering a search peer's basic info
  (/services/server/info).
* Note: Read/write timeouts are automatically set to twice this value.
* Defaults to 10.
removedTimedOutServers = [true|false]
* This setting is no longer supported, and will be ignored.
autoAddServers = [true|false]
* This setting is deprecated
bestEffortSearch = [true|false]
* Whether to remove a peer from search when it does not have any of our
  bundles.
* If set to true searches will never block on bundle replication, even when
  a peer is first added - the peers that don't have any common bundles will
  simply not be searched.
* Defaults to false.
skipOurselves = [true|false]
* This setting is deprecated
  peers. Any realtime searches started after the peer has been quarantined
  will not contact the peer.
* Whenever a quarantined peer is excluded from search, appropriate warnings
  will be displayed in the search.log and Job Inspector.
useDisabledListAsBlacklist = <boolean>
* Whether or not the search head treats the 'disabled_servers' setting as a
  blacklist.
* If set to 'true', search peers that appear in both the 'servers' and
  'disabled_servers' lists are disabled and do not participate in search.
* If set to 'false', search peers that appear in both lists are treated as
  enabled, despite being in the 'disabled_servers' list. These search peers
  do participate in search.
* Default: false
shareBundles = [true|false]
* Indicates whether this server will use bundle replication to share search
  time configuration with search peers.
* If set to false, the search head assumes that all the search peers can
  access the correct bundles via shared storage and have configured the
  options listed under "SEARCH HEAD BUNDLE MOUNTING OPTIONS".
* Defaults to true.
useSHPBundleReplication = <bool>|always
* Relevant only in search head pooling environments. Whether the search
  heads in the pool should compete with each other to decide which one
  should handle the bundle replication (every time bundle replication needs
  to happen) or whether each of them should individually replicate the
  bundles.
* When set to always and bundle mounting is being used then use the search
  head pool guid rather than each individual server name to identify
  bundles (and search heads to the remote peers).
* Defaults to true.
trySSLFirst = <bool>
* This setting is no longer supported, and will be ignored.
peerResolutionThreads = <int>
* This setting is no longer supported, and will be ignored.
defaultUriScheme = [http|https]
* When a new peer is added without specifying a scheme for the uri to its
  management port we will use this scheme by default.
* Defaults to https.
[tokenExchKeys]
certDir = <directory>
* This directory contains the local Splunk instance's distributed search
  key pair.
* This directory also contains the public keys of servers that distribute
  searches to this Splunk instance.
publicKey = <filename>
* Name of public key file for this Splunk instance.
privateKey = <filename>
* Name of private key file for this Splunk instance.
genKeyScript = <command>
* Command used to generate the two files above.
[replicationSettings]
maxMemoryBundleSize = <int>
* The maximum size (in MB) of bundles to hold in memory. If a bundle is
  larger than this, the bundle is read and encoded on the fly for each peer
  the replication is taking place with.
* Defaults to 10.
maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can occur.
  If the bundle is larger than this, bundle replication will not occur and
  an error message will be logged.
* Defaults to: 2048 (2GB)
concerningReplicatedFileSize = <int>
* Any individual file within a bundle that is larger than this value (in
  MB) will trigger a splunkd.log message.
* Where possible, avoid replicating such files, e.g. by customizing your
  blacklists.
* Defaults to: 500
excludeReplicatedLookupSize = <int>
* Any lookup file larger than this value (in MB) will be excluded from the
  knowledge bundle that the search head replicates to its search peers.
* When this value is set to 0, this feature is disabled.
* Defaults to 0.
allowSkipEncoding = <bool>
* Whether to avoid URL-encoding bundle data on upload.
* Defaults to: true
allowDeltaUpload = <bool>
* Whether to enable delta-based bundle replication.
* Defaults to: true
sanitizeMetaFiles = <bool>
* Whether to sanitize or filter *.meta files before replication.
* This feature can be used to avoid unnecessary replications triggered by
  writes to *.meta files that have no real effect on search behavior.
* The types of stanzas that "survive" filtering are configured via the
  replicationSettings:refineConf stanza.
* The filtering process removes comments and cosmetic whitespace.
* Defaults to: true
enableRFSReplication = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Required on search heads.
* When search heads generate bundles, these bundles are uploaded to
the configured remote file system.
* When search heads delete their old bundles, they subsequently
attempt to delete the bundle from the configured remote file system.
* If set to true, remote file system bundle replication is enabled.
* Default: false.
enableRFSMonitoring = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Required on search peers.
* Search peers periodically monitor the configured remote file system
and download any bundles that they do not have on disk.
* If set to true, remote file system bundle monitoring is enabled.
* Default: false.
rfsSyncReplicationTimeout = <unsigned int>
* Currently not supported. This setting is related to a feature that is
  still under development.
* The amount of time, in seconds, that a search head waits for synchronous
  replication to complete. Only applies to RFS bundle replication.
* Default value is computed from 'rfsMonitoringPeriod', i.e.
  (rfsMonitoringPeriod + 60) * 5, where 60 is the non-configurable search
  head to search peer polling interval, and 5 is an arbitrary multiplier.
  If 'rfsMonitoringPeriod' is not modified, the default value is 600.
* Default: auto.
remote.s3.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
  still under development.
* The URL of the remote storage system supporting the S3 API.
* The protocol, http or https, can be used to enable or disable SSL
  connectivity with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint will be
  constructed automatically based on the EC2 region of the instance where
  the indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
remote.s3.encryption = sse-s3 | none
* Specifies the schema to use for Server-side Encryption (SSE) for data
  at rest.
* sse-s3: See:
  http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
* none: Server-side encryption is disabled. Data is stored unencrypted on
  the remote storage.
* Default: none
[replicationSettings:refineConf]
replicate.<conf_file_name> = <bool>
* Controls whether Splunk replicates a particular type of *.conf file,
  along with any associated permissions in *.meta files.
* These settings on their own do not cause files to be replicated. A file
  must still be whitelisted (via replicationWhitelist) to be eligible for
  inclusion via these settings.
* In a sense, these settings constitute another level of filtering that
  applies specifically to *.conf files and stanzas with *.meta files.
* Defaults to: false
[replicationWhitelist]
<name> = <whitelist_pattern>
* Controls Splunk's search-time conf replication from search heads to
  search nodes.
* Only files that match a whitelist entry will be replicated.
* Conversely, files which are not matched by any whitelist will not be
  replicated.
* Only files located under $SPLUNK_HOME/etc will ever be replicated in this
  way.
* The regex will be matched against the filename, relative to
  $SPLUNK_HOME/etc.
  Example: for a file "$SPLUNK_HOME/etc/apps/fancy_app/default/inputs.conf"
  this whitelist should match "apps/fancy_app/default/inputs.conf"
* Similarly, the etc/system files are available as system/...
  user-specific files are available as users/username/appname/...
* The 'name' element is generally just descriptive, with one exception:
  if <name> begins with "refine.", files whitelisted by the given pattern
  will also go through another level of filtering configured in the
  replicationSettings:refineConf stanza.
* The whitelist_pattern is the Splunk-style pattern matching, which is
  primarily regex-based with special local behavior for '...' and '*'.
  * ... matches anything, while * matches anything besides directory
    separators. See props.conf.spec for more detail on these.
  * Note '.' will match a literal dot, not any character.
* Note that these lists are applied globally across all conf data, not to
  any particular app, regardless of where they are defined. Be careful to
  pull in only your intended files.
[replicationBlacklist]
<name> = <blacklist_pattern>
* All comments from the replication whitelist notes above also apply here.
* Replication blacklist takes precedence over the whitelist, meaning that a
  file that matches both the whitelist and the blacklist will NOT be
  replicated.
* This can be used to prevent unwanted bundle replication in two common
  scenarios:
  * Very large files, which part of an app may not want to be replicated,
    especially if they are not needed on search nodes.
  * Frequently updated files (for example, some lookups) will trigger
    retransmission of all search head data.
* Note that these lists are applied globally across all conf data.
  Especially for blacklisting, be careful to constrain your blacklist to
  match only data your application will not need.
[bundleEnforcerWhitelist]
<name> = <whitelist_pattern>
* Peers use this to make sure knowledge bundles sent by search heads and
  masters do not contain alien files.
* If this stanza is empty, the receiver accepts the bundle unless it
  contains files matching the rules specified in [bundleEnforcerBlacklist].
  Hence, if both [bundleEnforcerWhitelist] and [bundleEnforcerBlacklist]
  are empty (which is the default), then the receiver accepts all bundles.
* If this stanza is not empty, the receiver accepts the bundle only if it
  contains only files that match the rules specified here but not those in
  [bundleEnforcerBlacklist].
* All rules are regexes.
* This stanza is empty by default.
[bundleEnforcerBlacklist]
<name> = <blacklist_pattern>
* Peers use this to make sure knowledge bundles sent by search heads and
  masters do not contain alien files.
* This list overrides [bundleEnforcerWhitelist] above. That means the
  receiver rejects (i.e. removes) the bundle if it contains any file that
  matches the rules specified here even if that file is allowed by
  [bundleEnforcerWhitelist].
* If this stanza is empty, then only [bundleEnforcerWhitelist] matters.
* This stanza is empty by default.
# You set these attributes on the search peers only, and only if you also
# set shareBundles=false in [distributedSearch] on the search head. Use
# them to achieve replication-less bundle access. The search peers use a
# shared storage mountpoint to access the search head bundles
# ($SPLUNK_HOME/etc).
#******************************************************************************
[searchhead:<searchhead-splunk-server-name>]
* <searchhead-splunk-server-name> is the name of the related searchhead
  installation.
* This setting is located in server.conf, serverName = <name>
mounted_bundles = [true|false]
* Determines whether the bundles belonging to the search head specified in
  the stanza name are mounted.
* You must set this to "true" to use mounted bundles.
* Default is "false".
bundles_location = <path_to_bundles>
* The path to where the search head's bundles are mounted. This must be the
  mountpoint on the search peer, not on the search head. This should point
  to a directory that is equivalent to $SPLUNK_HOME/etc/. It must contain
  at least the following subdirectories: system, apps, users.
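# A hypothetical search-peer configuration for mounted bundles (the search
# head name and mountpoint are illustrative only):
#
# [searchhead:searchhead1]
# mounted_bundles = true
# bundles_location = /mnt/searchhead1/etc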
[distributedSearch:<splunk-server-group-name>]
* <splunk-server-group-name> is the name of the splunk-server-group that is
  defined in this stanza
servers = <comma separated list of servers>
* Comma separated list of peer identifiers i.e. hostname:port.
default = [true|false]
* Will set this as the default group of peers against which all searches
  are run unless a server-group is explicitly specified.
distsearch.conf.example
# Version 7.2.1
#
# These are example configurations for distsearch.conf. Use this file to
# configure distributed search. For all available attribute/value pairs,
# see distsearch.conf.spec.
#
# There is NO DEFAULT distsearch.conf.
#
# To use one or more of these configurations, copy the configuration block
# into distsearch.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[distributedSearch]
servers = https://192.168.1.1:8059,https://192.168.1.2:8059
# This stanza controls what files are replicated to the other peers. Each
# entry is a regex.
[replicationWhitelist]
allConf = *.conf
eventdiscoverer.conf
The following are the spec and example files for eventdiscoverer.conf.
eventdiscoverer.conf.spec
# Version 7.2.1
# This file contains possible attributes and values you can use to
# configure event discovery through the search command "typelearner."
#
# There is an eventdiscoverer.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place an eventdiscoverer.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# eventdiscoverer.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
eventdiscoverer.conf.example
# Version 7.2.1
#
# This is an example eventdiscoverer.conf. These settings are used to
# control the discovery of common eventtypes used by the typelearner search
# command.
#
# To use one or more of these configurations, copy the configuration block
# into eventdiscoverer.conf in $SPLUNK_HOME/etc/system/local/. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
event_renderers.conf
The following are the spec and example files for event_renderers.conf.
event_renderers.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring event
# rendering properties.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
# There is an event_renderers.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place an event_renderers.conf in
# $SPLUNK_HOME/etc/system/local/, or your own custom app directory.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<name>]
css_class = <css class name suffix to apply to the parent event element
class attribute>
* This can be any valid css class value.
* The value is appended to a standard suffix string of "splEvent-". A
  css_class value of foo would result in the parent element of the event
  having an html attribute class with a value of splEvent-foo (for example,
  class="splEvent-foo"). You can externalize your css style rules for this
  in $APP/appserver/static/application.css. For example, to make the text
  red you would add to application.css: .splEvent-foo { color:red; }
event_renderers.conf.example
# Version 7.2.1
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/system/local.
# To make changes, copy the section/stanza you want to change from
# $SPLUNK_HOME/etc/system/default
# into ../local and edit there.
#
# This file contains mappings between Splunk eventtypes and event renderers.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
[event_renderer_1]
eventtype = hawaiian_type
priority = 1
css_class = EventRenderer1
[event_renderer_2]
eventtype = french_food_type
priority = 1
template = event_renderer2.html
css_class = EventRenderer2
[event_renderer_3]
eventtype = japan_type
priority = 1
css_class = EventRenderer3
eventtypes.conf
The following are the spec and example files for eventtypes.conf.
eventtypes.conf.spec
# Version 7.2.1
#
# This file contains all possible attributes and value pairs for an
# eventtypes.conf file. Use this file to configure event types and their
# properties. You can also pipe any search to the "typelearner" command to
# create event types. Event types created this way will be written to
# $SPLUNK_HOME/etc/system/local/eventtypes.conf.
#
# There is an eventtypes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an eventtypes.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# eventtypes.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<$EVENTTYPE>]
  event type with the header [cisco-%code%] that has "code=432" becomes
  labeled "cisco-432".
disabled = [1|0]
* Toggle event type on or off.
* Set to 1 to disable.
search = <string>
* Search terms for this event type.
* For example: error OR warn.
* NOTE: You cannot base an event type on:
  * A search that includes a pipe operator (a "|" character).
  * A subsearch (a search pipeline enclosed in square brackets).
  * A search referencing a report. This is a best practice. Any report that
    is referenced by an event type can later be updated in a way that makes
    it invalid as an event type. For example, a report that is updated to
    include transforming commands cannot be used as the definition for an
    event type. You have more control over your event type if you define it
    with the same search string as the report.
description = <string>
* Optional human-readable description of this saved search.
tags = <string>
* DEPRECATED - see tags.conf.spec
color = <string>
* Color for this event type.
* Supported colors: none, et_blue, et_green, et_magenta, et_orange,
  et_purple, et_red, et_sky, et_teal, et_yellow
eventtypes.conf.example
# Version 7.2.1
#
# This file contains an example eventtypes.conf. Use this file to configure
# custom eventtypes.
#
# To use one or more of these configurations, copy the configuration block
# into eventtypes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
[error]
search = error OR fatal
[cisco-%code%]
search = cisco
fields.conf
The following are the spec and example files for fields.conf.
fields.conf.spec
# Version 7.2.1
#
# This file contains possible attribute and value pairs for:
# * Telling Splunk how to handle multi-value fields.
# * Distinguishing indexed and extracted fields.
# * Improving search performance by telling the search processor how to
#   handle field values.
# Use this file if you are creating a field at index time (not advised).
#
# There is a fields.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a fields.conf in $SPLUNK_HOME/etc/system/local/.
# For examples, see fields.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<field name>]
TOKENIZER = <regex>
* Use this setting to configure multivalue fields (refer to the online
  documentation for multivalue fields).
* A regular expression that indicates how the field can take on multiple
  values at the same time.
* If empty, the field can only take on a single value.
* Otherwise, the first group is taken from each match to form the set of
  values.
* This setting is used by the "search" and "where" commands, the summary
  and XML outputs of the asynchronous search API, and by the top, timeline
  and stats commands.
* Tokenization of indexed fields (INDEXED = true) is not supported so this
  attribute is ignored for indexed fields.
* Defaults to empty.
INDEXED = [true|false]
* Indicate whether a field is indexed or not.
* Set to true if the field is indexed.
* Set to false for fields extracted at search time (the majority of fields).
* Defaults to false.
INDEXED_VALUE = [true|false|<sed-cmd>|<simple-substitution-string>]
* Set this to true if the value is in the raw text of the event.
* Set this to false if the value is not in the raw text of the event.
* Setting this to true expands any search for key=value into a search of
  value AND key=value (since value is indexed).
* For advanced customization, this setting supports sed style substitution.
  For example, 'INDEXED_VALUE=s/foo/bar/g' would take the value of the
  field, replace all instances of 'foo' with 'bar', and use that new value
  as the value to search in the index.
* This setting also supports a simple substitution based on looking for the
  literal string '<VALUE>' (including the '<' and '>' characters).
  For example, 'INDEXED_VALUE=source::*<VALUE>*' would take a search for
  'myfield=myvalue' and search for 'source::*myvalue*' in the index as a
  single term.
* For both substitution constructs, if the resulting string starts with a
  '[', Splunk interprets the string as a Splunk LISPY expression. For
  example, 'INDEXED_VALUE=[OR <VALUE> source::*<VALUE>]' would turn
  'myfield=myvalue' into applying the LISPY expression
  '[OR myvalue source::*myvalue]' (meaning it matches either 'myvalue' or
  'source::*myvalue' terms).
* Defaults to true.
* NOTE: You only need to set indexed_value if indexed = false.
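To make the relationship between these settings concrete, here is a minimal sketch, assuming a hypothetical indexed field "txn_id" whose value appears verbatim in the raw event text, and a hypothetical search-time field "err" whose value does not:

[txn_id]
INDEXED = true

[err]
INDEXED = false
# The value of err never appears in the raw text, so keep Splunk from
# expanding searches for err=<value> into an indexed search on the bare value:
INDEXED_VALUE = false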
fields.conf.example
# Version 7.2.1
#
# This file contains an example fields.conf. Use this file to configure
# dynamic field extractions.
#
# To use one or more of these configurations, copy the configuration block
# into fields.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# These tokenizers result in the values of To, From and Cc treated as a
# list, where each list element is an email address found in the raw string
# of data.
[To]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[From]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[Cc]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
health.conf
The following are the spec and example files for health.conf.
health.conf.spec
# Version 7.2.1
#
# This file sets the default thresholds for Splunk Enterprise's built-in
# Health Report.
#
# Feature stanzas contain indicators, and each indicator has two thresholds:
# * Yellow: Indicates something is wrong and should be investigated.
# * Red: Means that the indicator is effectively not working.
#
# There is a health.conf in the $SPLUNK_HOME/etc/system/default/ directory.
# Never change or copy the configuration files in the default directory.
# The files in the default directory must remain intact and in their
# original location.
#
# To set custom configurations, create a new file with the name health.conf
# in the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific
# settings that you want to customize to the local configuration file.
#
# To learn more about configuration files (including precedence), see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[health_reporter]
full_health_log_interval = <number>
* The amount of time, in seconds, that elapses between each
  "PeriodicHealthReporter=INFO" log entry.
* Default: 30.
suppress_status_update_ms = <number>
* The minimum amount of time, in milliseconds, that must elapse between an
  indicator's health status changes.
* Changes that occur earlier will be suppressed.
* Default: 300.
alert.disabled = [0|1]
* A value of 1 disables the alerting feature for health reporter.
* If the value is set to 1, alerting for all features is disabled.
* Default: 0 (enabled)
alert.actions = <string>
* The alert actions that will run when an alert is fired.
alert.min_duration_sec = <integer>
* The minimum amount of time, in seconds, that the health status color must
  persist within threshold_color before triggering an alert.
* Default: 60.
alert.threshold_color = [yellow|red]
* The health status color that will trigger an alert.
* Default: red.
alert.suppress_period = <integer>[m|s|h|d]
* The minimum amount of time, in [minutes|seconds|hours|days], that must
  elapse between each fired alert.
* Alerts that occur earlier will be sent as a batch after this time period
  elapses.
* Default: 10 minutes.
[clustering]
health_report_period = <number>
* The amount of time, in seconds, that elapses between each Clustering
health report run.
* Default: 20.
disabled = [0|1]
* A value of 1 disables the clustering feature health check.
* Default: 0 (enabled)
[feature:*]
suppress_status_update_ms = <number>
* The minimum amount of time, in milliseconds, that must elapse between an
  indicator's health status changes.
* Changes that occur earlier will be suppressed.
* Default: 300.
display_name = <string>
* A human readable name for the feature.
alert.disabled = [0|1]
* A value of 1 disables alerting for this feature.
* If alerting is disabled in the [health_reporter] stanza, alerting for
  this feature is disabled, regardless of the value set here.
* Otherwise, if the value is set to 1, alerting for all indicators is
  disabled.
* Default: 0 (enabled)
alert.min_duration_sec = <integer>
* The minimum amount of time, in seconds, that the health status color must
  persist within threshold_color before triggering an alert.
alert.threshold_color = [yellow|red]
* The health status color to trigger an alert.
* Default: red.
[alert_action:*]
disabled = [0|1]
* A value of 1 disables this alert action.
* Default: 0 (enabled)
health.conf.example
# Version 7.2.1
#
# This file contains an example health.conf. Use this file to configure
# thresholds for Splunk Enterprise's built-in Health Report.
#
# To use one or more of these configurations, copy the configuration block
# into health.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
[health_reporter]
# Every 30 seconds a new "PeriodicHealthReporter=INFO" log entry will be
# created.
full_health_log_interval = 30
# If an indicator's health status changes before 600 milliseconds elapses,
# the status change will be suppressed.
suppress_status_update_ms = 600
# Alerting for all features is enabled.
# You can disable alerting for each feature by setting 'alert.disabled'
# to 1.
alert.disabled = 0
# If you don't want to send alerts too frequently, you can define a minimum
# time period that must elapse before another alert is fired. Alerts
# triggered during the suppression period are sent after the period expires
# as a batch.
# The suppress_period value can be in seconds, minutes, hours, and days,
# and uses the format: 60s, 60m, 60h and 60d.
# Default is 10 minutes.
alert.suppress_period = 30m
[alert_action:email]
# Enable email alerts for the health report.
# Before you can send an email alert, you must configure the email
# notification settings on the email settings page.
# In the 'Search and Reporting' app home page, click Settings > Server
# settings > Email settings, and specify values for the settings.
# After you configure email settings, click Settings > Alert actions.
# Make sure that the 'Send email' option is enabled.
disabled = 0
[alert_action:pagerduty]
# Enable PagerDuty alerts for the health report.
# Before you can send an alert to PagerDuty, you must configure some
# settings on both the PagerDuty side and the Splunk Enterprise side.
# In PagerDuty, you must add a service to save your new integration.
# From the Integrations tab of the created service, copy the Integration
# Key string to the 'action.integration_url_override' below.
# On the Splunk side, you must install the PagerDuty Incidents app from
# Splunkbase.
# After you install the app, in Splunk Web, click Settings > Alert actions.
# Make sure that the PagerDuty app is enabled.
disabled = 0
action.integration_url_override = 123456789012345678901234567890ab
[clustering]
# The clustering health report runs every 20 seconds.
health_report_period = 20
# Enable the clustering feature health check.
disabled = 0
[feature:s2s_autolb]
# If more than 20% of forwarding destinations have failed, health status
changes to yellow.
indicator:s2s_connections:yellow = 20
# If more than 70% of forwarding destinations have failed, health status
changes to red.
indicator:s2s_connections:red = 70
# Alerting for all indicators is disabled.
alert.disabled = 1
[feature:batchreader]
# Enable alerts for feature:batchreader. If there is no 'alert.disabled'
# value specified in a feature stanza, then the alert is enabled for the
# feature by default.
# You can also enable/disable alerts at the indicator level, using the
# setting: 'alert:<indicator name>.disabled'.
alert.disabled = 0
# You can define the duration that an unhealthy status persists before the
# alert fires.
# Default value is 60 seconds.
# You can also define the min_duration_sec for each indicator using the
# setting: 'alert:<indicator name>.min_duration_sec'.
# The indicator-level setting overrides the feature-level min_duration_sec
# setting.
alert.min_duration_sec = 30
indexes.conf
The following are the spec and example files for indexes.conf.
indexes.conf.spec
# Version 7.2.1
#
# This file contains all possible options for an indexes.conf file. Use
# this file to configure Splunk's indexes and their properties.
#
# There is an indexes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an indexes.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see indexes.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# CAUTION: You can drastically affect your Splunk installation by changing
# these settings. Consult technical support
# (http://www.splunk.com/page/submit_issue) if you are not sure how to
# configure this file.
#
GLOBAL SETTINGS
bucketMerging = <bool>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Set to true to enable the bucket merging service on all indexes.
* You can override this value per index.
* Defaults to false.
memPoolMB = <positive integer>|auto
* Determines how much memory is given to the indexer memory pool. This
  restricts the number of outstanding events in the indexer at any given
  time.
* Must be greater than 0; maximum value is 1048576 (which corresponds to
  1 TB).
* Setting this too high can lead to splunkd memory usage going up
  substantially.
* Setting this too low can degrade splunkd indexing performance.
* Setting this to "auto" or an invalid value will cause Splunk to autotune
  this parameter.
* Defaults to "auto".
* The values derived when "auto" is seen are as follows:
  * System Memory Available less than ... | memPoolMB
                 1 GB                     |   64 MB
                 2 GB                     |  128 MB
                 8 GB                     |  128 MB
                 8 GB or higher           |  512 MB
* Only set this value if you are an expert user or have been advised to by
  Splunk Support.
* CARELESSNESS IN SETTING THIS MAY LEAD TO PERMANENT BRAIN DAMAGE OR
  LOSS OF JOB.
rtRouterThreads = 0|1
* Set this to 1 if you expect to use non-indexed real time searches
  regularly. Index throughput drops rapidly if there are a handful of these
  running concurrently on the system.
* If you are not sure what "indexed vs non-indexed" real time searches are,
  see README of indexed_realtime* settings in limits.conf.
* NOTE: This is not a boolean value, only 0 or 1 is accepted. In the
  future, we may allow more than a single thread, but the current
  implementation only allows one to create a single thread per pipeline set.
assureUTF8 = true|false
* Verifies that all data retrieved from the index is proper by validating
  all the byte strings.
* This does not ensure all data will be emitted, but can be a workaround
  if an index is corrupted in such a way that the text inside it is no
  longer valid utf8.
* Will degrade indexing performance when enabled (set to true).
* Can only be set globally, by specifying in the [default] stanza.
* Defaults to false.
enableRealtimeSearch = true|false
* Enables real-time searches.
* Defaults to true.
maxRunningProcessGroups = <positive integer>
* Highest legal value is 4294967295
* Defaults to 8 (note: up until 5.0 it defaulted to 20)
inPlaceUpdates = true|false
* If true, metadata updates are written to the .data files directly.
* If false, metadata updates are written to a temporary file and then moved
  into place.
* Intended for advanced debugging of metadata issues.
* Setting this parameter to false (to use a temporary file) will impact
  indexing performance, particularly with large numbers of hosts, sources,
  or sourcetypes (~1 million, across all indexes.)
* This is an advanced parameter; do NOT set unless instructed by Splunk
  Support.
* Defaults to true.
serviceOnlyAsNeeded = true|false
* DEPRECATED; use 'serviceInactiveIndexesPeriod'.
* Causes index service (housekeeping tasks) overhead to be incurred only
  after index activity.
* Indexer module problems may be easier to diagnose when this optimization
  is disabled (set to false).
* Defaults to true.
tsidxStatsHomePath = <path on server>
* An absolute path that specifies where Splunk creates namespace data with
  the 'tscollect' command.
* If the directory does not exist, we attempt to create it.
* Optional. If this is unspecified, we default to the 'tsidxstats'
  directory under $SPLUNK_DB.
* CAUTION: Path "$SPLUNK_DB" must be writable.
disabled = true|false
* Toggles your index entry off and on.
* Set to true to disable an index.
* Defaults to false.
deleted = true
* If present, means that this index has been marked for deletion: if
  splunkd is running, deletion is in progress; if splunkd is stopped,
  deletion will re-commence on startup.
* Normally absent, hence no default.
* Do NOT manually set, clear, or modify value of this parameter.
* Seriously: LEAVE THIS PARAMETER ALONE.
* Generally speaking, volumes provide a more appropriate way to control the
  storage location for indexes in a general way.
* Must restart splunkd after changing this parameter; index reload will not
  suffice.
* We strongly recommend that you avoid the use of environment variables in
  index paths, aside from the possible exception of SPLUNK_DB. See homePath
  for the complete rationale.
* CAUTION: Do not set this parameter on indexes that have been configured
  to use remote storage with the "remotePath" parameter.
createBloomfilter = true|false
* Controls whether to create bloomfilter files for the index.
* TRUE: bloomfilter files will be created. FALSE: not created.
* Defaults to true.
* CAUTION: Do not set this parameter to "false" on indexes that have been
  configured to use remote storage with the "remotePath" parameter.
* Highest legal value in computed seconds is 2 billion, or 2000000000,
  which is approximately 68 years.
* Defaults to 30d.
enableOnlineBucketRepair = true|false
* Controls asynchronous "online fsck" bucket repair, which runs
  concurrently with Splunk.
* When enabled, you do not have to wait until buckets are repaired to start
  Splunk.
* When enabled, you might observe a slight performance degradation.
* Defaults to true.
enableDataIntegrityControl = true|false
* If set to true, hashes are computed on the rawdata slices and stored for
  future data integrity checks.
* If set to false, no hashes are computed on the rawdata slices.
* It has a global default value of false.
maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, the oldest data is frozen.
* NOTE: If the index grows beyond 'maxTotalDataSizeMB' megabytes before
  'frozenTimePeriodInSecs' seconds have passed, data could prematurely roll
  to frozen. As the default policy for rolling data to frozen is deletion,
  unintended data loss could occur.
* Highest legal value is 4294967295.
* Defaults to 500000.
frozenTimePeriodInSecs = <nonnegative integer>
* Number of seconds after which indexed data rolls to frozen.
* If you do not specify a coldToFrozenScript, data is deleted when rolled
  to frozen.
* IMPORTANT: Every event in the DB must be older than frozenTimePeriodInSecs
  before it will roll. Then, the DB will be frozen the next time splunkd
  checks (based on rotatePeriodInSecs attribute).
* Highest legal value is 4294967295.
* Defaults to 188697600 (6 years).
* If $DIR is present, it will be expanded to the absolute path to the
  directory.
* If $DIR is not present, the directory will be added to the end of the
  invocation line of the script.
  * This is important for Windows.
    * For historical reasons, the entire string is broken up by
      shell-pattern expansion rules.
    * Since windows paths frequently include spaces, and the windows shell
      breaks on space, the quotes are needed for the script to understand
      the directory.
* If your script can be run directly on your platform, you can specify just
  the script.
  * Examples of this are:
    * .bat and .cmd files on Windows
    * scripts set executable on UNIX with a #! shebang line pointing to a
      valid interpreter.
* You can also specify an explicit path to an interpreter and the script.
  * Example: /path/to/my/installation/of/python.exe path/to/my/script.py
* Splunk ships with an example archiving script in $SPLUNK_HOME/bin called
  coldToFrozenExample.py that you SHOULD NOT USE.
* DO NOT USE the example for production use, because:
  * 1 - It will be overwritten on upgrade.
  * 2 - You should be implementing whatever requirements you need in a
        script of your creation. If you have no such requirements, use
        coldToFrozenDir.
* Example configuration:
  * If you create a script in bin/ called our_archival_script.py, you could
    use:
    UNIX:
      coldToFrozenScript = "$SPLUNK_HOME/bin/python"
        "$SPLUNK_HOME/bin/our_archival_script.py"
    Windows:
      coldToFrozenScript = "$SPLUNK_HOME/bin/python"
        "$SPLUNK_HOME/bin/our_archival_script.py" "$DIR"
* The example script handles data created by different versions of splunk
  differently. Specifically data from before 4.2 and after are handled
  differently. See "Freezing and Thawing" below.
* The script must be in $SPLUNK_HOME/bin or a subdirectory thereof.
coldToFrozenDir = <path to frozen archive>
* Splunk will automatically put frozen buckets in this directory
* For information on how buckets created by different versions are
handled, see "Freezing and Thawing" below.
* If both coldToFrozenDir and coldToFrozenScript are specified,
coldToFrozenDir will take precedence
* Must restart splunkd after changing this parameter; index reload will not
  suffice.
* May NOT contain a volume reference.
compressRawdata = true|false
* This parameter is ignored. The splunkd process always compresses raw
data.
maxDataSize = <positive integer>|auto|auto_high_volume
* The maximum size in MB for a hot DB to reach before a roll to warm is
  triggered.
* Specifying "auto" or "auto_high_volume" will cause Splunk to autotune
  this parameter (recommended).
* You should use "auto_high_volume" for high-volume indexes (such as the
  main index); otherwise, use "auto". A "high volume index" would typically
  be considered one that gets over 10GB of data per day.
* Defaults to "auto", which sets the size to 750MB.
* "auto_high_volume" sets the size to 10GB on 64-bit, and 1GB on 32-bit
systems.
* Although the maximum value you can set this is 1048576 MB, which
corresponds to 1 TB, a reasonable number ranges anywhere from 100 to
50000. Before proceeding with any higher value, please seek approval
of
Splunk Support.
* If you specify an invalid number or string, maxDataSize will be auto
tuned.
* NOTE: The maximum size of your warm buckets may slightly exceed
'maxDataSize', due to post-processing and timing issues with the
rolling
policy.
* Specifying "disable" disables syncing entirely: uncompressed slices are
  removed as soon as compression is complete.
* Some filesystems are very inefficient at performing sync operations, so
  only enable this if you are sure it is needed.
* Must restart splunkd after changing this parameter; index reload will not
  suffice.
* No exponent may follow the decimal.
* Highest legal value is 18446744073709551615.
* Defaults to "disable".
maxHotIdleSecs = <nonnegative integer>
* Provides a ceiling for buckets to stay in hot status without receiving
  any data.
* If a hot bucket receives no data for more than maxHotIdleSecs seconds,
  Splunk rolls it to warm.
* This setting operates independently of maxHotBuckets, which can also
  cause hot buckets to roll.
* A value of 0 turns off the idle check (equivalent to infinite idle time).
* Highest legal value is 4294967295.
* Defaults to 0.
minHotIdleSecsBeforeForceRoll = <nonnegative integer>|auto
* When there are no existing hot buckets that can fit new events because of
  their timestamps and the constraints on the index (refer to maxHotBuckets,
  maxHotSpanSecs and quarantinePastSecs), if any hot bucket has been idle
  (i.e. not receiving any data) for minHotIdleSecsBeforeForceRoll number of
  seconds, a new bucket will be created to receive these new events and the
  idle bucket will be rolled to warm.
* If no hot bucket has been idle for minHotIdleSecsBeforeForceRoll number
  of seconds, or if minHotIdleSecsBeforeForceRoll has been set to zero,
  then a best fit bucket will be chosen for these new events from the
  existing set of hot buckets.
* This setting operates independently of maxHotIdleSecs, which causes hot
  buckets to roll after they have been idle for maxHotIdleSecs number of
  seconds, *regardless* of whether new events can fit into the existing hot
  buckets or not due to an event timestamp. minHotIdleSecsBeforeForceRoll,
  on the other hand, controls a hot bucket roll *only* under the
  circumstances when the timestamp of a new event cannot fit into the
  existing hot buckets given the other parameter constraints on the system
  (parameters such as maxHotBuckets, maxHotSpanSecs and quarantinePastSecs).
* auto: Specifying "auto" will cause Splunk to autotune this parameter
  (recommended). The value begins at 600 seconds but automatically adjusts
  upwards for optimal performance. Specifically, the value will increase
  when a hot bucket rolls due to idle time with a significantly smaller
  size than maxDataSize. As a consequence, the outcome may be fewer
  buckets, though these buckets may span wider earliest-latest time ranges
  of events.
* 0: A value of 0 turns off the idle check (equivalent to infinite idle
  time). Setting this to zero means that we will never roll a hot bucket
  for the reason that an event cannot fit into an existing hot bucket due
  to the constraints of other parameters. Instead, we will find a best
  fitting bucket to accommodate that event.
* Highest legal value is 4294967295.
* NOTE: If you set this configuration, there is a chance that this could
  lead to frequent hot bucket rolls depending on the value. If your index
  contains a large number of buckets whose size-on-disk falls considerably
  short of the size specified in maxDataSize, and if the reason for the
  roll of these buckets is due to "caller=lru", then setting the parameter
  value to a larger value or to zero may reduce the frequency of hot bucket
  rolls (see "auto" above). You may check splunkd.log for a message similar
  to the one below for rolls due to this setting:
    INFO HotBucketRoller - finished moving hot to warm
    bid=_internal~0~97597E05-7156-43E5-85B1-B0751462D16B idx=_internal
    from=hot_v1_0 to=db_1462477093_1462477093_0 size=40960 caller=lru
    maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
* Defaults to "auto".
maxMetaEntries = <nonnegative integer>
* NOTE: since at least 5.0.x, large strings.data from punct will be rare.
* There is a delta between when maximum is exceeded and bucket is rolled.
* This means a bucket may end up with epsilon more lines than specified,
  but this is not a major concern unless excess is significant.
* If set to 0, this setting is ignored (it is treated as infinite).
* Highest legal value is 4294967295.
syncMeta = true|false
* When "true", a sync operation is called before file descriptor is
closed
on metadata file updates.
* This functionality was introduced to improve integrity of metadata
files,
especially in regards to operating system crashes/machine failures.
* NOTE: Do not change this parameter without the input of a Splunk
support
professional.
* Must restart splunkd after changing this parameter; index reload will
not
suffice.
* Defaults to true.
isReadOnly = true|false
* Set to true to make an index read-only.
* If true, no new events can be added to the index, but the index is still
  searchable.
* Must restart splunkd after changing this parameter; index reload will not
  suffice.
* Defaults to false.
disableGlobalMetadata = true|false
* NOTE: This option was introduced in 4.3.3, but as of 5.0 it is obsolete
  and ignored if set.
* It used to disable writing to the global metadata. In 5.0 global metadata
  was removed.
repFactor = 0|auto
* Valid only for indexer cluster peer nodes.
* Determines whether an index gets replicated.
* Value of 0 turns off replication for this index.
* Value of "auto" turns on replication for this index.
* This attribute must be set to the same value on all peer nodes.
* Defaults to 0.
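For example, on an indexer cluster peer node, a minimal sketch enabling replication for a hypothetical index "app_logs" (the same stanza must appear on every peer):

[app_logs]
homePath = $SPLUNK_DB/app_logs/db
coldPath = $SPLUNK_DB/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb
repFactor = auto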
minStreamGroupQueueSize = <nonnegative integer>
* Minimum size of the queue that stores events in memory before committing
  them to a tsidx file. As Splunk operates, it continually adjusts this
  size internally. Splunk could decide to use a small queue size and thus
  generate tiny tsidx files under certain unusual circumstances, such as
  file system errors. The danger of a very low minimum is that it can
  generate very tiny tsidx files with one or very few events, making it
  impossible for splunk-optimize to catch up and optimize the tsidx files
  into reasonably sized files.
* Defaults to 2000.
* Only set this value if you have been advised to by Splunk Support.
* Highest legal value is 4294967295.
journalCompression = gzip|lz4|zstd
* Select compression algorithm for rawdata journal file of new buckets.
* This does not have any effect on already created buckets -- there is no
  problem searching buckets compressed with different algorithms.
* zstd is only supported in Splunk 7.2.x and later -- do not enable that
  compression format if you have an indexer cluster where some indexers
  are running an older version of splunk.
* Defaults to gzip.
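As a minimal sketch, assuming every indexer in the deployment is on 7.2.x or later, a hypothetical index "netflow" could opt into zstd journal compression while other indexes keep the gzip default:

[netflow]
homePath = $SPLUNK_DB/netflow/db
coldPath = $SPLUNK_DB/netflow/colddb
thawedPath = $SPLUNK_DB/netflow/thaweddb
journalCompression = zstd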
enableTsidxReduction = true|false
* By enabling this setting, you turn on the tsidx reduction capability.
  This causes the indexer to reduce the tsidx files of buckets, when the
  buckets reach the age specified by timePeriodInSecBeforeTsidxReduction.
* CAUTION: Do not set this parameter to "true" on indexes that have been
  configured to use remote storage with the "remotePath" parameter.
* Defaults to false.
tsidxWritingLevel = 1|2
* Enables various performance and space-saving improvements for tsidx
  files.
* Set this to 2 if this node is NOT part of a multi-site index cluster,
  OR if you have a multi-site cluster and all your indexer nodes are 7.2.0
  or higher.
* Defaults to 1.
suspendHotRollByDeleteQuery = true|false
* When the "delete" search command is run, all buckets containing data
373
to be deleted are
marked for updating of their metadata files. The indexer normally
first rolls any hot buckets,
as rolling must precede the metadata file updates.
* When suspendHotRollByDeleteQuery is set to true, the rolling of hot
buckets for the "delete"
command is suspended. The hot buckets, although marked, do not roll
immediately, but instead
wait to roll in response to the same circumstances operative for any
other hot buckets; for
example, due to reaching a limit set by maxHotBuckets, maxDataSize,
etc. When these hot buckets
finally roll, their metadata files are then updated.
* Defaults to false
PER PROVIDER OPTIONS
vix.mode = stream|report
* Usually specified at the family level.
* Typically should be "stream". In general, do not use "report" without
consulting Splunk Support.
vix.command = <command>
* The command to be used to launch an external process for searches on this
  provider.
* Usually specified at the family level.
vix.command.arg.<N> = <argument>
* The Nth argument to the command specified by vix.command.
* Usually specified at the family level, but frequently overridden at the
  provider level, for example to change the jars used depending on the
  version of Hadoop to which a provider connects.
#**************************************************************************
# PER PROVIDER OPTIONS -- HADOOP
# These options are specific to ERPs with the Hadoop family.
# NOTE: Many of these properties specify behavior if the property is not
# set. However, default values set in system/default/indexes.conf
# take precedence over the "unset" behavior.
#**************************************************************************
vix.splunk.setup.onsearch = true|false
* Whether to perform setup (install & bundle replication) on search.
* Defaults to false.
vix.splunk.search.debug = true|false
* Whether to run searches against this index in debug mode. In debug mode,
  additional information is logged to search.log.
* Optional. Defaults to false.
vix.splunk.search.mixedmode = true|false
* Whether mixed mode execution is enabled.
* Defaults to true.
vix.splunk.impersonation = true|false
* Enable/disable user impersonation.
vix.splunk.setup.bundle.setup.timelimit = <positive integer>
* A positive number, representing a time duration in milliseconds.
* Defaults to 20,000 (i.e. 20 seconds).
* A task will wait this long for a bundle to be installed before it quits.
vix.splunk.setup.package.replication = <positive integer>
* Set custom replication factor for the Splunk package on HDFS. This is the
  package set in the property vix.splunk.setup.package.
* Must be an integer between 1 and 32767.
* Increasing this setting may help performance on large clusters by
  decreasing the average access time for the package across Task Nodes.
* Optional. If not set, the default replication factor for the file-system
  will apply.
vix.splunk.search.column.filter = true|false
* Enables/disables column filtering. When enabled, Hunk will trim columns
  that are not necessary to a query on the Task Node, before returning the
  results to the search process.
* Should normally increase performance, but does have its own small
  overhead.
* Works with these formats: CSV, Avro, Parquet, Hive.
* If not set, defaults to true.
#
# Kerberos properties
#
#
# The following properties affect the SplunkMR heartbeat mechanism. If this
# mechanism is turned on, the SplunkMR instance on the Search Head updates
# a heartbeat file on HDFS. Any MR job spawned by report or mix-mode
# searches checks the heartbeat file. If it is not updated for a certain
# time, it will consider SplunkMR to be dead and kill itself.
#
vix.splunk.heartbeat = true|false
* Turn on/off heartbeat update on search head, and checking on MR side.
* If not set, defaults to true.
vix.splunk.heartbeat.threshold = <positive integer>
* The number of times the MR job will detect a missing heartbeat update
  before it considers SplunkMR dead and kills itself.
* Default value is 10.
#
# Sequence file
#
vix.splunk.search.recordreader.sequence.ignore.key = true|false
* When reading sequence files, if this key is enabled, events will be
  expected to only include a value. Otherwise, the expected representation
  is key+"\t"+value.
* Defaults to true.
#
# Avro
#
vix.splunk.search.recordreader.avro.regex = <regex>
* Regex that files must match in order to be considered avro files.
* Optional. Defaults to \.avro$
#
# Parquet
#
vix.splunk.search.splitter.parquet.simplifyresult = true|false
* If enabled, field names for map and list type fields will be simplified
  by dropping intermediate "map" or "element" subfield names. Otherwise, a
  field name will match parquet schema completely.
* May be specified in either the provider stanza or in the virtual index
  stanza.
* Defaults to true.
#
# Hive
#
vix.splunk.search.splitter.hive.ppd = true|false
* Enable or disable Hive ORC Predicate Push Down.
* If enabled, ORC PPD will be applied whenever possible to prune
  unnecessary data as early as possible to optimize the search.
* If not set, defaults to true.
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.fileformat = textfile|sequencefile|rcfile|orc
* Format of the Hive data files in this provider.
* If not set, defaults to "textfile".
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.fileformat.inputformat = <InputFormat class>
* Fully-qualified class name of an InputFormat to be used with Hive table
  data.
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.rowformat.fields.terminated = <delimiter>
* Will be set as the Hive SerDe property "field.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.rowformat.lines.terminated = <delimiter>
* Will be set as the Hive SerDe property "line.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.rowformat.mapkeys.terminated = <delimiter>
* Will be set as the Hive SerDe property "mapkey.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index
  stanza.
vix.splunk.search.splitter.hive.rowformat.collectionitems.terminated = <delimiter>
* Will be set as the Hive SerDe property "colelction.delim" (the property
  name preserves a long-standing misspelling in Hive itself).
* Optional.
* May be specified in either the provider stanza or in the virtual index
  stanza.
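Pulling the Hive options together, here is a minimal sketch of provider-level settings for reading ORC-backed Hive tables; the provider name "hadoop_hive" and the values are illustrative only:

[provider:hadoop_hive]
vix.splunk.search.splitter.hive.fileformat = orc
vix.splunk.search.splitter.hive.ppd = true
# For delimited text tables you might instead use, for example:
# vix.splunk.search.splitter.hive.fileformat = textfile
# vix.splunk.search.splitter.hive.rowformat.fields.terminated = \t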
#
# Archiving
#
PER VIRTUAL INDEX OPTIONS
# These options affect virtual indexes. Like indexes, these options may
# be set under an [<virtual-index>] entry.
#
# Virtual index names have the same constraints as normal index names.
#
# Each virtual index must reference a provider. I.e:
# [virtual_index_name]
# vix.provider = <provider_name>
#
# All configuration keys starting with "vix." will be passed to the
# external resource provider (ERP).
#**************************************************************************
vix.provider = <provider_name>
* Name of the external resource provider to use for this virtual index.
#**************************************************************************
# PER VIRTUAL INDEX OPTIONS -- HADOOP
# These options are specific to ERPs with the Hadoop family.
#**************************************************************************
#
# The vix.input.* configurations are grouped by an id.
# Inputs configured via the UI always use '1' as the id.
# In this spec we'll use 'x' as the id.
#
vix.input.x.path = <path>
* Path in a hadoop filesystem (usually HDFS or S3).
* May contain wildcards.
* Checks the path for data recursively when ending with '...'
* Can extract fields with ${field}. I.e: "/data/${server}/...", where
  server will be extracted.
* May start with a schema.
  * The schema of the path specifies which hadoop filesystem
    implementation to use. Examples:
    * hdfs://foo:1234/path, will use a HDFS filesystem implementation
    * s3a://s3-bucket/path, will use a S3 filesystem implementation
vix.input.x.accept = <regex>
* Specifies a whitelist regex.
* Only files within the location given by matching vix.input.x.path, whose
  paths match this regex, will be searched.
vix.input.x.ignore = <regex>
* Specifies a blacklist regex.
* Searches will ignore paths matching this regex.
* These matches take precedence over vix.input.x.accept matches.
vix.input.x.et.offset = <seconds>
* Offset in seconds to add to the extracted earliest time.
vix.input.x.lt.regex = <regex>
* Latest time equivalent of vix.input.x.et.regex
vix.input.x.lt.offset = <seconds>
* Latest time equivalent of vix.input.x.et.offset
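Taken together, a minimal sketch of a virtual index stanza might look like the following; the index name "web_archive_vix", the provider name, and the path are hypothetical:

[web_archive_vix]
vix.provider = hadoop_prod
vix.input.1.path = hdfs://namenode:8020/data/${server}/...
vix.input.1.accept = \.gz$
vix.input.1.ignore = \.tmp$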
#
# Archiving
#
vix.output.buckets.path = <hadoop path>
* Path to a hadoop filesystem where buckets will be archived
vix.output.buckets.older.than = <seconds>
* Buckets must be this old before they will be archived.
* A bucket's age is determined by the earliest _time field of any event in
  the bucket.
vix.unified.search.cutoff_sec = <seconds>
* Window length before present time that configures where events are
  retrieved for unified search.
* Events from now to now-cutoff_sec will be retrieved from the splunk index
  and events older than cutoff_sec will be retrieved from the archive
  index.
#**************************************************************************
# PER VIRTUAL INDEX OR PROVIDER OPTIONS -- HADOOP
# These options can be set at either the virtual index level or provider
# level, for the Hadoop ERP.
#
# Options set at the virtual index level take precedence over options set
# at the provider level.
#
# Virtual index level prefix:
# vix.input.<input_id>.<option_suffix>
#
# Provider level prefix:
# vix.splunk.search.<option_suffix>
#**************************************************************************
#
# Record reader options
#
recordreader.<name>.<conf_key> = <conf_value>
* Sets a configuration key for a RecordReader with <name> to <conf_value>.
recordreader.<name>.regex = <regex>
* Regex specifying which files this RecordReader can be used for.
recordreader.journal.buffer.size = <bytes>
* Buffer size used by the journal record reader
recordreader.csv.dialect = default|excel|excel-tab|tsv
* Set the csv dialect for csv files
* A csv dialect differs on delimiter_char, quote_char and escape_char.
* Here is how the different dialects are defined, in order: delimiter,
  quote, and escape:
  * default   = ,  " \
  * excel     = ,  " "
  * excel-tab = \t " "
  * tsv       = \t " \
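For instance, a minimal sketch that tells a provider to treat matching files as tab-separated values; per the key-prefix rules described above, the full key forms are shown (the input id is hypothetical):

# In a provider stanza:
vix.splunk.search.recordreader.csv.dialect = tsv
# Or per virtual index, for input id 1:
vix.input.1.recordreader.csv.dialect = tsv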
#
# Splitter options
#
splitter.<name>.<conf_key> = <conf_value>
* Sets a configuration key for a split generator with <name> to
  <conf_value>.
* See comment above under "PER VIRTUAL INDEX OR PROVIDER OPTIONS". This
  means that the full format is:
  * vix.input.N.splitter.<name>.<conf_key> (in a vix stanza)
  * vix.splunk.search.splitter.<name>.<conf_key> (in a provider stanza)
splitter.file.split.minsize = <bytes>
* Minimum size in bytes for file splits.
* Defaults to 1.
splitter.file.split.maxsize = <bytes>
* Maximum size in bytes for file splits.
* Defaults to Long.MAX_VALUE.
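As a minimal sketch, a provider could raise the minimum split size to reduce the number of splits (and thus tasks) for jobs over many small files; the value shown (134217728 bytes, i.e. 128 MB) is illustrative only:

vix.splunk.search.splitter.file.split.minsize = 134217728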
#**************************************************************************
# Dynamic Data Self Storage settings. This section describes settings that
# affect the archiver-optional and archiver-mandatory parameters only.
#
# As the first step in the Dynamic Data Self Storage feature, it allows
# users to move their data from Splunk indexes to customer-owned external
# storage in AWS S3 when the data reaches the end of the retention period.
# Note that only the raw data and delete marker files are transferred to
# the external storage.
# Future development may include the support for storage hierarchies and
# the automation of data rehydration.
#
# For example, use the following settings to configure Dynamic Data Self
# Storage.
# archiver.selfStorageProvider = S3
# archiver.selfStorageBucket = mybucket
# archiver.selfStorageBucketFolder = folderXYZ
#**************************************************************************
archiver.selfStorageProvider = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Specifies the storage provider for Self Storage.
* Optional. Only required when using Self Storage.
* The only supported provider is S3. More providers will be added in the
  future for other cloud vendors and other storage options.
archiver.selfStorageBucket = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Specifies the destination bucket for Self Storage.
* Optional. Only required when using Self Storage.
archiver.selfStorageBucketFolder = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Specifies the folder on the destination bucket for Self Storage.
* Optional. If not specified, data is uploaded to the root path in the
  destination bucket.
#**************************************************************************
# Dynamic Data Archive allows you to move your data from your Splunk Cloud
# indexes to a storage location. You can configure Splunk Cloud to
# automatically move the data in an index when the data reaches the end of
# the Splunk Cloud retention period you configure. In addition, you can
# restore your data to Splunk Cloud if you need to perform some analysis
# on the data.
# For each index, you can use Dynamic Data Self Storage or Dynamic Data
# Archive, but not both.
#
# For example, use the following settings to configure Dynamic Data
# Archive.
# archiver.coldStorageProvider = Glacier
# archiver.coldStorageRetentionPeriod = 365
#**************************************************************************
archiver.coldStorageProvider = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Specifies the storage provider for Dynamic Data Archive.
* Optional. Only required when using Dynamic Data Archive.
* The only supported provider is Glacier. More providers will be added in
  the future for other cloud vendors and other storage options.
archiver.enableDataArchive = true|false
* Currently not supported. This setting is related to a feature that is
  still under development.
* If set to true, Dynamic Data Archiver is enabled for the index.
* Defaults to false.
#**************************************************************************
# Volume settings. This section describes settings that affect the
# volume-optional and volume-mandatory parameters only.
#
# All volume stanzas begin with "volume:". For example:
# [volume:volume_name]
# path = /foo/bar
#
# These volume stanzas can then be referenced by individual index
# parameters, e.g. homePath or coldPath. To refer to a volume stanza, use
# the "volume:" prefix. For example, to set a cold DB to the example stanza
# above, in index "hiro", use:
# [hiro]
# coldPath = volume:volume_name/baz
# This will cause the cold DB files to be placed under /foo/bar/baz. If the
# volume spec is not followed by a path
# (e.g. "coldPath=volume:volume_name"), then the cold path would be
# composed by appending the index name to the volume name ("/foo/bar/hiro").
#
# If "path" is specified with a URI-like value (e.g., "s3://bucket/path"),
# this is a remote storage volume. A remote storage volume can only be
# referenced by a remotePath parameter, as described above. An Amazon S3
# remote path might look like "s3://bucket/path", whereas an NFS remote
# path might look like "file:///mnt/nfs". The name of the scheme ("s3" or
# "file" from these examples) is important, because it can indicate some
# necessary configuration specific to the type of remote storage. To
# specify a configuration under the remote storage volume stanza, you use
# parameters with the pattern "remote.<scheme>.<param name>". These
# parameters vary according to the type of remote storage. For example,
# remote storage of type S3 might require that you specify an access key
# and a secret key. You would do this through the "remote.s3.access_key"
# and "remote.s3.secret_key" parameters.
#
# Note: thawedPath may not be defined in terms of a volume.
# Thawed allocations are manually controlled by Splunk administrators,
# typically in recovery or archival/review scenarios, and should not
# trigger changes in space automatically used by normal index activity.
#**************************************************************************
path = <path on server> | <"scheme://remote-location-specifier">
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific string
  for identifying a location inside the storage system.
datatype = <event|metric>
* Optional, defaults to 'event'.
* Determines whether the index stores log events or metric data.
* If set to 'metric', we optimize the index to store metric data which can
  be queried later only using the mstats operator as searching metric data
  is different from traditional log events.
* Use 'metric' data type only for metric sourcetypes like statsd.
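For example, a minimal sketch of a metrics index for statsd-style data; the index name "metrics_statsd" is hypothetical:

[metrics_statsd]
homePath = $SPLUNK_DB/metrics_statsd/db
coldPath = $SPLUNK_DB/metrics_statsd/colddb
thawedPath = $SPLUNK_DB/metrics_statsd/thaweddb
datatype = metric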
remote.* = <String>
* Optional.
* With remote volumes, communication between the indexer and the external
  storage system may require additional configuration, specific to the
  type of storage system. You can pass configuration information to the
  storage system by specifying the settings through the following schema:
  remote.<scheme>.<config-variable> = <value>.
  For example: remote.s3.access_key = ACCESS_KEY
################################################################
##### S3 specific settings
################################################################
remote.s3.header.<http-method-name>.<header-field-name> = <String>
* Optional.
* Enable server-specific features, such as reduced redundancy, encryption,
  and so on, by passing extra HTTP headers with the REST requests.
  The <http-method-name> can be any valid HTTP method. For example, GET,
  PUT, or ALL, for setting the header field for all HTTP methods.
* Example: remote.s3.header.PUT.x-amz-storage-class = REDUCED_REDUNDANCY
remote.s3.access_key = <String>
* Optional.
* Specifies the access key to use when authenticating with the remote
  storage system supporting the S3 API.
* If not specified, the indexer will look for these environment variables:
  AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order).
* If the environment variables are not set and the indexer is running on
  EC2, the indexer attempts to use the access key from the IAM role.
* Default: unset
remote.s3.secret_key = <String>
* Optional.
* Specifies the secret key to use when authenticating with the remote
  storage system supporting the S3 API.
* If not specified, the indexer will look for these environment variables:
  AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order).
* If the environment variables are not set and the indexer is running on
  EC2, the indexer attempts to use the secret key from the IAM role.
* Default: unset
remote.s3.list_objects_version = v1|v2
* The AWS S3 Get Bucket (List Objects) Version to use.
* See AWS S3 documentation "GET Bucket (List Objects) Version 2" for
details.
* Default: v1
remote.s3.signature_version = v2|v4
* Optional.
* The signature version to use when authenticating with the remote storage
  system supporting the S3 API.
* If not specified, it defaults to v4.
* For 'sse-kms' server-side encryption scheme, you must use
  signature_version=v4.
remote.s3.auth_region = <String>
* Optional
* The authentication region to use for signing requests when interacting
  with the remote storage system supporting the S3 API.
* Used with v4 signatures only.
* If unset and the endpoint (either automatically constructed or
  explicitly set with the remote.s3.endpoint setting) uses an AWS URL (for
  example, https://s3-us-west-1.amazonaws.com), the instance attempts to
  extract the value from the endpoint URL (for example, "us-west-1"). See
  the description for the remote.s3.endpoint setting.
* If unset and an authentication region cannot be determined, the request
  will be signed with an empty region value.
* Defaults: unset
remote.s3.endpoint = <URL>
* Optional.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL
  connectivity with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint will be
  constructed automatically based on the EC2 region of the instance where
  the indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
remote.s3.multipart_download.part_size = <unsigned int>
* Optional.
* Sets the download size of parts during a multipart download.
* This setting uses HTTP/1.1 Range Requests (RFC 7233) to improve
  throughput overall and for retransmission of failed transfers.
* A value of 0 disables downloading in multiple parts, i.e., files will
  always be downloaded as a single (large) part.
* Do not change this value unless that value has been proven to improve
  throughput.
* Minimum value: 5242880 (5 MB)
* Defaults: 134217728 (128 MB)
remote.s3.enable_data_integrity_checks = <bool>
* If set to true, Splunk sets the data checksum in the metadata field of
  the HTTP header during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote.s3.enable_signed_payloads = <bool>
* If set to true, Splunk signs the payload during upload operation to S3.
* Valid only for remote.s3.signature_version = v4
* Default: true
remote.s3.retry_policy = max_count
* Optional.
* Sets the retry policy to use for remote file operations.
* A retry policy specifies whether and how to retry file operations that
  fail for those failures that might be intermittent.
* Retry policies:
  + "max_count": Imposes a maximum number of times a file operation will
    be retried upon intermittent failure both for individual parts of a
    multipart download or upload and for files as a whole.
* Defaults: max_count
remote.s3.max_count.max_retries_per_part = <unsigned int>
* Optional.
* When the remote.s3.retry_policy setting is max_count, sets the maximum
  number of times a file operation will be retried upon intermittent
  failure.
* The count is maintained separately for each file part in a multipart
  download or upload.
* Defaults: 9
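To illustrate, a minimal sketch of a remote storage volume that allows a few extra retries per part on an unreliable network path; the bucket name and credentials are placeholders:

[volume:remote_s3]
storageType = remote
path = s3://example-bucket/indexes
remote.s3.retry_policy = max_count
remote.s3.max_count.max_retries_per_part = 12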
remote.s3.sslVerifyServerCert = <bool>
* Optional
* If this is set to true, Splunk verifies the certificate presented by the
  S3 server and checks that the common name/alternate name matches the
  ones specified in 'remote.s3.sslCommonNameToCheck' and
  'remote.s3.sslAltNameToCheck'.
* Defaults: false
remote.s3.sslVersions = <versions_list>
* Optional
* Comma-separated list of SSL versions to connect to 'remote.s3.endpoint'.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version
  "tls" selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
  does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless of this
  configuration.
* Defaults: tls1.2
remote.s3.sslRootCAPath = <path>
* Optional
* Full path to the Certificate Authority (CA) certificate PEM format file
  containing one or more certificates concatenated together. The S3
  certificate will be validated against the CAs present in this file.
* Defaults: [sslConfig/caCertFile] in server.conf
remote.s3.dhFile = <path>
* Optional
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Defaults: unset
remote.s3.encryption.sse-c.key_type = kms
* Optional
* Determines the mechanism Splunk uses to generate the key for sending
  over to S3 for SSE-C.
* The only valid value is 'kms', indicating AWS KMS service.
* You must specify the required KMS settings, e.g. remote.s3.kms.key_id,
  for Splunk to start up while using SSE-C.
* Defaults: kms.
remote.s3.kms.key_id = <String>
* Required if remote.s3.encryption = sse-c | sse-kms
* Specifies the identifier for Customer Master Key (CMK) on KMS. It can be
  the unique key ID or the Amazon Resource Name (ARN) of the CMK or the
  alias name or ARN of an alias that refers to the CMK.
* Examples:
  Unique key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
  CMK ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
  Alias name: alias/ExampleAlias
  Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
* Defaults: unset
remote.s3.kms.access_key = <String>
* Optional.
* Similar to 'remote.s3.access_key'.
* If not specified, KMS access uses 'remote.s3.access_key'.
* Default: unset
remote.s3.kms.secret_key = <String>
* Optional.
* Similar to 'remote.s3.secret_key'.
* If not specified, KMS access uses 'remote.s3.secret_key'.
* Default: unset
remote.s3.kms.auth_region = <String>
* Required if 'remote.s3.auth_region' is unset and Splunk can not
automatically extract this information.
* Similar to 'remote.s3.auth_region'.
* If not specified, KMS access uses 'remote.s3.auth_region'.
* Defaults: unset
remote.s3.kms.<ssl_settings> = <...>
* Optional.
* Check the descriptions of the SSL settings for remote.s3.<ssl_settings>
  above, e.g. remote.s3.sslVerifyServerCert.
* Valid ssl_settings are sslVerifyServerCert, sslVersions, sslRootCAPath,
  sslAltNameToCheck, sslCommonNameToCheck, cipherSuite, ecdhCurves and
  dhFile.
* All of these are optional and fall back to the same defaults as
  remote.s3.<ssl_settings>.
indexes.conf.example
# Version 7.2.1
#
# This file contains an example indexes.conf. Use this file to configure
# indexing properties.
#
# To use one or more of these configurations, copy the configuration block
# into indexes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
defaultDatabase = hatch
[hatch]
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
maxDataSize = 10000
maxHotBuckets = 10
[default]
maxTotalDataSizeMB = 650000
maxGlobalDataSizeMB = 0
# The following example changes the time data is kept around by default.
# It also sets an export script. NOTE: You must edit this script to set
# export location before running it.
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozenScript.py"
# This example freezes buckets on the same schedule, but lets Splunk do
# the freezing process as opposed to a script.
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive"
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the
# existing ones
[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000
# index definitions
[idx1]
homePath = volume:hot1/idx1
coldPath = volume:cold1/idx1
[idx2]
# note that the specific indexes must take care to avoid collisions
homePath = volume:hot1/idx2
coldPath = volume:cold2/idx2
thawedPath = $SPLUNK_DB/idx2/thaweddb
[idx3]
homePath = volume:hot1/idx3
coldPath = volume:cold2/idx3
thawedPath = $SPLUNK_DB/idx3/thaweddb
[volume:small_indexes]
path = /mnt/splunk_indexes
maxVolumeDataSizeMB = 100000
[rare_data]
homePath=volume:small_indexes/rare_data/db
coldPath=volume:small_indexes/rare_data/colddb
thawedPath=$SPLUNK_DB/rare_data/thaweddb
maxHotBuckets = 2
# main, and any other large volume indexes you add sharing large_indexes,
# will together be constrained to 50TB, separately from the 100GB of
# the small_indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume
# indexes, and ones where the variations in the timestream of events is
# hard to predict.
maxDataSize = auto_high_volume
maxHotBuckets = 10
[idx1_large_vol]
homePath=volume:large_indexes/idx1_large_vol/db
coldPath=volume:large_indexes/idx1_large_vol/colddb
thawedPath=$SPLUNK_DB/idx1_large/thaweddb
# this index will exceed the default of .5TB, requiring a change to
# maxTotalDataSizeMB
maxTotalDataSizeMB = 750000
maxDataSize = auto_high_volume
maxHotBuckets = 10
# but the data will only be retained for about 30 days
frozenTimePeriodInSecs = 2592000
### This example demonstrates database size constraining ###
# global settings
# volumes
[volume:caliente]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:frio]
path = /mnt/big_disk
maxVolumeDataSizeMB = 1000000
# indexes
[i1]
homePath = volume:caliente/i1
# homePath.maxDataSizeMB is inherited
coldPath = volume:frio/i1
# coldPath.maxDataSizeMB not specified: no limit - old-style behavior
thawedPath = $SPLUNK_DB/i1/thaweddb
[i2]
homePath = volume:caliente/i2
# overrides the default maxDataSize
homePath.maxDataSizeMB = 1000
coldPath = volume:frio/i2
# limits the cold DBs
coldPath.maxDataSizeMB = 10000
thawedPath = $SPLUNK_DB/i2/thaweddb
[i3]
homePath = /old/style/path
homePath.maxDataSizeMB = 1000
coldPath = volume:frio/i3
coldPath.maxDataSizeMB = 10000
thawedPath = $SPLUNK_DB/i3/thaweddb
# main, and any other large volume indexes you add sharing large_indexes
# will together be constrained to 50TB, separately from the rest of
# the indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume
# indexes
maxDataSize = auto_high_volume
maxHotBuckets = 10
[volume:s3]
storageType = remote
path = s3://example-s3-bucket/remote_volume
remote.s3.access_key = S3_ACCESS_KEY
remote.s3.secret_key = S3_SECRET_KEY
[default]
remotePath = volume:s3/$_index_name
[i4]
coldPath = $SPLUNK_DB/$_index_name/colddb
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
[i5]
coldPath = $SPLUNK_DB/$_index_name/colddb
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
inputs.conf
The following are the spec and example files for inputs.conf.
inputs.conf.spec
# Version 7.2.1
# This file contains possible settings you can use to configure inputs,
# distributed inputs such as forwarders, and file system monitoring in
# inputs.conf.
#
# There is an inputs.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place an inputs.conf in $SPLUNK_HOME/etc/system/local/. For
# examples, see inputs.conf.example. You must restart Splunk to enable new
# configurations.
#
# To learn more about configuration files (including precedence), see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
#*******
# GENERAL SETTINGS:
# The following settings are valid for all input types (except file system
# change monitor, which is described in a separate section in this file).
# You must first enter a stanza header in square brackets, specifying the
# input type. See further down in this file for examples.
# Then, use any of the following settings.
#
# To specify global settings for Windows Event Log inputs, place them in
# the [WinEventLog] global stanza as well as the [default] stanza.
#*******
host = <string>
* Sets the host key/field to a static value for this input stanza.
* The input uses this field during parsing and indexing. It also uses this
  field at search time.
* As a convenience, the input prepends the chosen string with 'host::'.
* If set to '$decideOnStartup', sets the field to the hostname of the
  executing machine. This occurs on each splunkd startup.
* If you run multiple instances of the software on the same machine (hardware
  or virtual machine), choose unique values for 'host' to differentiate
  your data, ex. myhost-sh-1 or myhost-idx-2.
* Do not put the <string> value in quotes. Use host=foo, not host="foo".
* If you remove the 'host' setting from
  $SPLUNK_HOME/etc/system/local/inputs.conf, or remove
  $SPLUNK_HOME/etc/system/local/inputs.conf, the setting changes to
  "$decideOnStartup". Apps that need a resolved host value should use the
  'host_resolved' property in the response for the REST 'GET' call of the
  input source. This property is set to the hostname of the local Splunk
  instance. It is a read-only property that is not written to inputs.conf.
* Default: "$decideOnStartup", but at installation time, the setup logic
  adds the local hostname, as determined by DNS, to the
  $SPLUNK_HOME/etc/system/local/inputs.conf default stanza, which is the
  effective default value.
index = <string>
* Sets the index to store events from this input.
* Primarily used to specify the index to store events that come in through
  this input stanza.
* Default: "main" (or whatever you have set as your default index).
source = <string>
* Sets the source key/field for events from this input.
* Detail: Sets the source key initial value. The key is used during
parsing/indexing, in particular to set the source field during
indexing. It is also the source field used at search time.
* As a convenience, the chosen string is prepended with 'source::'.
* Avoid overriding the source key. The input layer provides a more accurate
  string to aid in problem analysis and investigation, recording the file
  from which the data was retrieved. Consider using source types, tagging,
  and search wildcards before overriding this value.
* Do not put the <string> value in quotes: Use source=foo, not source="foo".
* Default: the input file path.
sourcetype = <string>
* Sets the sourcetype key/field for events from this input.
* Explicitly declares the source type for this input instead of letting
it be determined through automated methods. This is important for
search and for applying the relevant configuration for this data type
during parsing and indexing.
* Sets the sourcetype key initial value. The key is used during
parsing or indexing to set the source type field during
indexing. It is also the source type field used at search time.
* As a convenience, the chosen string is prepended with 'sourcetype::'.
* Do not put the <string> value in quotes: Use sourcetype=foo,
not sourcetype="foo".
* If not set, the indexer analyzes the data and chooses a source type.
* No default.
queue = [parsingQueue|indexQueue]
* Sets the queue where the input processor should deposit the events it
reads.
* Set to "parsingQueue" to apply props.conf and other parsing rules to your
  data. For more information about props.conf and rules for timestamping and
  linebreaking, see props.conf and the online documentation at
  http://docs.splunk.com/Documentation.
* Set to "indexQueue" to send your data directly into the index.
* Default: parsingQueue.
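For example, a minimal sketch of how these general settings combine in a
single input stanza; the path, host, index, and sourcetype values here are
illustrative, and the index must already exist:
[monitor:///var/log/myapp/app.log]
host = myhost-idx-1
index = myapp_idx
sourcetype = myapp_log
queue = parsingQueue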
The pipeline keys that you can set directly in input stanzas
are as follows:
queue = <value>
_raw = <value>
_meta = <value>
_time = <value>
* Inputs have special support for mapping host, source, sourcetype, and
  index to their metadata names, such as host -> Metadata:Host.
* Defaulting these values is not recommended, and is generally only useful
  as a workaround to other product issues.
* Defaulting these keys in most cases will override the default behavior of
  input processors, but this behavior is not guaranteed in all cases.
* Values defaulted here, as with all values provided by inputs, can be
  altered by transforms at parse time.
# ***********
# This section contains options for routing data using inputs.conf rather
# than outputs.conf.
#
# NOTE: concerning routing via inputs.conf:
# This is a simplified set of routing options you can use as data comes in.
# For more flexible options or details on configuring required or optional
# settings, see outputs.conf.spec.
_TCP_ROUTING = <tcpout_group_name>,<tcpout_group_name>,<tcpout_group_name>, ...
* A comma-separated list of tcpout group names.
* This setting lets you selectively forward data to specific indexer(s).
* Specify the tcpout group that the forwarder should use when forwarding the
  data. The tcpout group names are defined in outputs.conf with
  [tcpout:<tcpout_group_name>].
* To forward data to all tcpout group names that have been defined in
  outputs.conf, set to '*' (asterisk).
* To forward data from the "_internal" index, you must explicitly set
  '_TCP_ROUTING' to either "*" or a specific splunktcp target group.
* Default: The groups specified in 'defaultGroup' in the [tcpout] stanza in
  outputs.conf.
_SYSLOG_ROUTING = <syslog_group_name>,<syslog_group_name>,<syslog_group_name>, ...
* A comma-separated list of syslog group names.
* Using this, you can selectively forward the data to specific destinations
  as syslog events.
* Specify the syslog group to use when forwarding the data.
  The syslog group names are defined in outputs.conf with
  [syslog:<syslog_group_name>].
* The destination host must be configured in outputs.conf, using
  "server=[<ip>|<servername>]:<port>".
* Default: The groups present in "defaultGroup" in the [syslog] stanza in
  outputs.conf.
_INDEX_AND_FORWARD_ROUTING = <string>
* Only has effect if you use the 'selectiveIndexing' feature in outputs.conf.
* If set for any input stanza, should cause all data coming from that input
  stanza to be labeled with this setting.
* When 'selectiveIndexing' is in use on a forwarder:
  * data without this label will not be indexed by that forwarder.
  * data with this label will be indexed in addition to any forwarding.
* This setting does not actually cause data to be forwarded or not forwarded
  in any way, nor does it control where the data is forwarded in
  multiple-forward path cases.
* Default: not present.
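As an illustration, a forwarder-side sketch of selective routing; the group
name 'security_indexers' is hypothetical and must match a [tcpout:...] group
defined in outputs.conf:
# inputs.conf on the forwarder
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers
# outputs.conf on the same forwarder
[tcpout:security_indexers]
server = 10.1.12.1:9997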
Blacklist
[blacklist:<path>]
* Protects files on the file system from being indexed or previewed.
* The input treats a file as blacklisted if the file starts with any of the
  defined blacklisted <paths>.
* Blacklisting of a file with the specified path occurs even if a monitor
  stanza defines a whitelist that matches the file path.
* The preview endpoint will return an error when asked to preview a
  blacklisted file.
* The oneshot endpoint and command will also return an error.
* When a blacklisted file is monitored (monitor:// or batch://), the
  filestatus endpoint will show an error.
* For fschange with the 'sendFullEvent' option enabled, contents of
  blacklisted files will not be indexed.
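For example, a sketch that prevents a sensitive directory from ever being
indexed or previewed (the path is illustrative):
[blacklist:/opt/app/private_keys]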
Valid input types follow, along with their input-specific settings:
MONITOR:
[monitor://<path>]
* Configures a file monitor input to watch all files in <path>.
* <path> can be an entire directory or a single file.
* You must specify the input type and then the path, so put three slashes in
  your path if you are starting at the root on *nix systems (to include the
  slash that indicates an absolute path).
# Additional settings:
host_segment = <integer>
* If set to N, Splunk software sets the Nth "/"-separated segment of the path
  as 'host'.
* For example, if host_segment=3, the third segment is used.
* If the value is not an integer or is less than 1, the default 'host'
  setting is used.
* On Windows machines, the drive letter and colon before the backslash count
  as one segment.
  * For example, if you set host_segment=3 and the monitor path is
    D:\logs\servers\host01, Splunk software sets the host as "servers"
    because that is the third segment.
* Default: Not set.
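A sketch of host_segment in use, assuming logs are laid out as
/exports/logs/<hostname>/... so that the third "/"-separated segment carries
the host name (the path is illustrative):
[monitor:///exports/logs]
host_segment = 3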
whitelist = <regular expression>
* If set, files from this input are monitored only if their path matches the
  specified regex.
* Takes precedence over the deprecated '_whitelist' setting, which functions
  the same way.
* Default: Not set.
crcSalt = <string>
* Use this setting to force the input to consume files that have matching
  CRCs (cyclic redundancy checks).
  * By default, the input only performs CRC checks against the first 256
    bytes of a file. This behavior prevents the input from indexing the same
    file twice, even though you might have renamed it, as with rolling log
    files, for example. Because the CRC is based on only the first few lines
    of the file, it is possible for legitimately different files to have
    matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the
  full directory path to the source file is added to the CRC. This ensures
  that each file being monitored has a unique CRC. When crcSalt is invoked,
  it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead
  to the log file being re-indexed after it has rolled.
* In many situations, initCrcLength can be used to achieve the same goals.
* Default: empty string.
initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to
  identify whether it is a file that has already been seen. You might want to
  adjust this if you have many files with common headers (comment headers,
  long CSV headers, etc) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting will cause data to be re-indexed. You
  might want to consult with Splunk Support before adjusting this value - the
  default is fine for most installations.
* Default: 256 (bytes).
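A common sketch for rolling log directories whose files share identical
headers; per the note above, crcSalt is usually set to <SOURCE> (the path is
illustrative):
[monitor:///var/log/myapp]
crcSalt = <SOURCE>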
ignoreOlderThan = <time window>
* Causes the monitor input to stop checking files for updates once their
  modification time is older than this time window. This is useful when
  monitoring locations that contain large numbers of older files, and when
  removing or blacklisting those files from the monitoring location is not
  a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even
  temporarily. Take potential downtime into consideration!
  * Suggested value: 14d, which means 2 weeks.
  * For example, a time window in significant numbers of days or small
    numbers of weeks is probably a reasonable choice.
  * If you need a time window in small numbers of days or hours, there are
    other approaches to consider for performant monitoring beyond the scope
    of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification
  time while the file is open and being actively written to. Windows delays
  updating modification time until the file is closed. Therefore you might
  have to choose a larger time window on Windows hosts where files may be
  open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* Default: unset, meaning there is no threshold and no files are ignored for
  modification time reasons.
followTail = [0|1]
* Whether or not the input should skip past current data in a monitored file
  for a given input stanza. This lets you skip over data in files, and
  immediately begin indexing current data.
* If you set to "1", monitoring starts at the end of the file (like
  *nix 'tail -f'). The input does not read any data that exists in the file
  when it is first encountered. The input only reads data that arrives after
  the first encounter time.
* If you set to "0", monitoring starts at the beginning of the file.
* This is an advanced setting. Contact Splunk Support before using it.
* Best practice for using this setting follows:
  * Enable this setting and start the Splunk software.
  * Wait enough time for the input to identify the related files.
  * Disable the setting and restart.
  * Do not leave 'followTail' enabled in an ongoing fashion.
* Do not use 'followTail' for rolling log files (log files that get renamed
  as they age) or files whose names or paths vary.
* Default: 0.
alwaysOpenFile = [0|1]
* Opens a file to check whether it has already been indexed, by skipping the
  modification time/size checks.
* Only useful for files that do not update modification time or size.
* Only known to be needed when monitoring files on Windows, mostly for
  Internet Information Server logs.
* Configuring this setting to "1" can increase load and slow indexing. Use it
  only as a last resort.
* Default: 0.
time_before_close = <integer>
* The amount of time, in seconds, that the file monitor must wait for
  modifications before closing a file after reaching an End-of-File (EOF)
  marker.
* Tells the input not to close files that have been updated in the past
  'time_before_close' seconds.
* Default: 3.
multiline_event_extra_waittime = <boolean>
* By default, the file monitor sends an event delimiter when:
  * It reaches EOF of a file it monitors and
  * The last character it reads is a newline.
* In some cases, it takes time for all lines of a multiple-line event to
  arrive.
* Set to "true" to delay sending an event delimiter until the time that the
  file monitor closes the file, as defined by the 'time_before_close'
  setting, to allow all event lines to arrive.
* Default: false.
recursive = <boolean>
* Whether or not the input monitors subdirectories that it finds within a
  monitored directory.
* If you set this setting to "false", the input does not monitor
  sub-directories.
* Default: true.
followSymlink = <boolean>
* Whether or not to follow any symbolic links within a monitored directory.
* If you set this setting to "false", the input ignores symbolic links
  that it finds within a monitored directory.
* If you set the setting to "true", the input follows symbolic links
  and monitors files at the symbolic link destination.
* Additionally, any whitelists or blacklists that the input stanza defines
  also apply to files at the symbolic link destination.
* Default: true.
_whitelist = ...
* DEPRECATED.
* This setting is valid unless the 'whitelist' setting also exists.
_blacklist = ...
* DEPRECATED.
* This setting is valid unless the 'blacklist' setting also exists.
Use the 'batch' input for large archives of historic data. If you want to
continuously monitor a directory or index small archives, use 'monitor'
(see above). 'batch' reads in the file and indexes it, and then deletes the
file on disk.
[batch://<path>]
* A one-time, destructive input of files in <path>.
* This stanza must include the 'move_policy = sinkhole' setting.
* This input reads and indexes the files, then DELETES THEM IMMEDIATELY.
* For continuous, non-destructive inputs of files, use 'monitor' instead.
# Additional settings:
move_policy = sinkhole
* This setting is required. You *must* include "move_policy = sinkhole"
  when you define batch inputs.
* This setting causes the input to load the file destructively.
* CAUTION: Do not use the 'batch' input type for files you do not want to
  delete after indexing.
* The 'move_policy' setting exists for historical reasons, but remains as a
  safeguard. As an administrator, you must explicitly declare that you want
  the data in the monitored directory (and its sub-directories) to be
  deleted after being read and indexed.
followSymlink = [true|false]
* Works similarly to the same setting for monitor, but does not delete files
  after following a symbolic link out of the monitored directory.
# documented above
host_regex = <regular expression>
host_segment = <integer>
crcSalt = <string>
recursive = [true|false]
whitelist = <regular expression>
blacklist = <regular expression>
initCrcLength = <integer>
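For example, a sketch of a one-time, destructive load of an upload directory
(the path is illustrative; files are deleted once indexed):
[batch:///opt/splunk_upload]
move_policy = sinkhole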
TCP:
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses
  this stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections
  on the specified port.
* Generates events with source set to "tcp:<port>", for example: tcp:514.
* If you do not specify a sourcetype, generates events with sourcetype set
  to "tcp-raw".
# Additional settings:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the
  system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the Splunk
  system hostname.
* Default: "dns".
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information
  on persistent queues and how the 'queueSize' and 'persistentQueueSize'
  settings interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
  be larger than the in-memory queue size (as defined by the 'queueSize'
  setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
  server.conf).
* Default: 0 (no persistent queue).
requireHeader = <boolean>
* Whether or not to require a header be present at the beginning of every
  stream.
* This header can be used to override indexing settings.
* Default: false.
listenOnIPv6 = [no|yes|only]
* Whether or not the input listens on IPv4, IPv6, or both.
* Set to 'yes' to listen on both IPv4 and IPv6 protocols.
* Set to 'only' to listen on only the IPv6 protocol.
* If not set, the input uses the setting in the [general] stanza of
  server.conf.
rawTcpDoneTimeout = <seconds>
* The amount of time, in seconds, that a network connection can remain idle
  before Splunk software declares that the last event over that connection
  has been received.
* If a connection over this port remains idle for more than
  'rawTcpDoneTimeout' seconds after receiving data, it adds a Done-key. This
  declares that the last event has been completely received.
* Default: 10.
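For instance, a sketch of a raw TCP listener that accepts connections on
port 514 from any host (the port and sourcetype choices are illustrative):
[tcp://514]
connection_host = dns
sourcetype = tcp-raw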
[tcp:<port>]
* Configures the input to listen on the specified TCP network port.
* This stanza is similar to [tcp://<remote server>:<port>], but listens for
  connections to the specified port from any host.
* Generates events with a source of tcp:<port>.
* If you do not specify a sourcetype, generates events with a source type of
  tcp-raw.
* This stanza supports the following settings:
connection_host = [ip|dns|none]
queueSize = <integer>[KB|MB|GB]
persistentQueueSize = <integer>[KB|MB|GB|TB]
requireHeader = <boolean>
listenOnIPv6 = [no|yes|only]
acceptFrom = <network_acl> ...
rawTcpDoneTimeout = <integer>
Data distribution:
# Global settings for splunktcp. Used on the receiving side for data
# forwarded from a forwarder.
[splunktcp]
route = [has_key|absent_key:<key>:<queueName>;...]
* Settings for the light forwarder.
* The receiver sets these parameters automatically -- you do not need to set
  them yourself.
* The property route is composed of rules delimited by ';' (semicolon).
* The receiver checks each incoming data payload through the cooked TCP port
  against the route rules.
* If a matching rule is found, the receiver sends the payload to the
  specified <queueName>.
* If no matching rule is found, the receiver sends the payload to the default
  queue specified by any queue= for this stanza. If no queue= key is set in
  the stanza or globally, the receiver sends the events to the parsingQueue.
enableS2SHeartbeat = <boolean>
* Specifies the global keepalive setting for all splunktcp ports.
* This option is used to detect forwarders which might have become
  unavailable due to network, firewall, or other problems.
* The receiver monitors each connection for presence of a heartbeat, and if
  the heartbeat is not seen for 's2sHeartbeatTimeout' seconds, it closes the
  connection.
* Default: true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <seconds>
* The amount of time, in seconds, that a receiver waits for heartbeats from
  forwarders that connect to this instance.
* The receiver closes a forwarder connection if it does not receive a
  heartbeat for 's2sHeartbeatTimeout' seconds.
* Default: 600 (10 minutes).
inputShutdownTimeout = <seconds>
* The amount of time, in seconds, that a receiver waits before shutting down
  inbound TCP connections after it receives a signal to shut down.
* Used during shutdown to minimize data loss when forwarders are connected
  to a receiver.
* During shutdown, the TCP input processor waits for 'inputShutdownTimeout'
  seconds and then closes any remaining open connections.
* If all connections close before the end of the timeout period, shutdown
  proceeds immediately, without waiting for the timeout.
stopAcceptorAfterQBlock = <seconds>
* Specifies the time, in seconds, to wait before closing the splunktcp port.
* If the receiver is unable to insert received data into the configured queue
  for more than the specified number of seconds, it closes the splunktcp
  port.
* This action prevents forwarders from establishing new connections to this
  receiver.
* Forwarders that have an existing connection will notice the port is closed
  upon test-connections and move to other receivers.
* Once the queue unblocks, and TCP Input can continue processing data, the
  receiver starts listening on the port again.
* This setting should not be adjusted lightly as extreme values can interact
  poorly with other defaults.
* Default: 300 (5 minutes).
listenOnIPv6 = no|yes|only
* Select whether this receiver listens on IPv4, IPv6, or both protocols.
* Set this to 'yes' to listen on both IPv4 and IPv6 protocols.
* Set to 'only' to listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza of
  server.conf.
negotiateNewProtocol = <boolean>
* DEPRECATED.
* Use the 'negotiateProtocolLevel' setting instead.
* Controls the default configuration of the 'negotiateProtocolLevel'
  setting.
* Default: true.
concurrentChannelLimit = <unsigned integer>
* The number of unique channel codes that a forwarder connecting to this
  receiver can have in flight concurrently; in-flight channels may not
  exceed this value.
* This setting only applies when the new forwarder protocol is in use.
* Default: 300.
[splunktcp://[<remote server>]:<port>]
* Receivers use this input stanza.
* This is the same as the [tcp://] stanza, except the remote server is
  assumed to be a Splunk instance, most likely a forwarder.
* <remote server> is optional. If you specify it, the receiver only listens
  for data from <remote server>.
* Use of <remote server> is not recommended. Use the 'acceptFrom' setting,
  which supersedes this setting.
connection_host = [ip|dns|none]
* For splunktcp, the 'host' or 'connection_host' will be used if the remote
  Splunk instance does not set a host, or if the host is set to
  "<host>::<localhost>".
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the
  system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the Splunk
  system hostname.
* Default: "ip".
compressed = <boolean>
* Whether or not the receiver communicates with the forwarder in compressed
  format.
* Applies to non-Secure Sockets Layer (SSL) receiving only. There is no
  compression setting required for SSL.
* If set to "true", the receiver communicates with the forwarder in
  compressed format.
* If set to "true", there is no longer a requirement to also set
  "compressed = true" in the outputs.conf file on the forwarder.
* Default: false.
enableS2SHeartbeat = <boolean>
* Specifies the keepalive setting for the splunktcp port.
* This option is used to detect forwarders which might have become
  unavailable due to network, firewall, or other problems.
* The receiver monitors the connection for presence of a heartbeat, and if it
  does not see the heartbeat in 's2sHeartbeatTimeout' seconds, it closes the
  connection.
* This overrides the default value specified at the global [splunktcp]
  stanza.
* Default: true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <integer>
* The amount of time, in seconds, that a receiver waits for heartbeats from
  forwarders that connect to this instance.
* The receiver closes the forwarder connection if it does not see a heartbeat
  for 's2sHeartbeatTimeout' seconds.
* This overrides the default value specified at the global [splunktcp]
  stanza.
* Default: 600 (10 minutes).
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
negotiateNewProtocol = <boolean>
* See the description for this setting in the [splunktcp] stanza.
[splunktcp:<port>]
* This input stanza is the same as [splunktcp://[<remote server>]:<port>],
  but accepts connections from any server.
* See the online documentation for [splunktcp://[<remote server>]:<port>]
  for more information on the following supported settings:
connection_host = [ip|dns|none]
compressed = <boolean>
enableS2SHeartbeat = <boolean>
s2sHeartbeatTimeout = <integer>
queueSize = <integer>[KB|MB|GB]
negotiateProtocolLevel = <unsigned integer>
negotiateNewProtocol = <boolean>
concurrentChannelLimit = <unsigned integer>
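A minimal receiving-side sketch; port 9997 is the conventional (but not
required) forwarding port:
[splunktcp:9997]
connection_host = ip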
[splunktcptoken://<token name>]
* Use this stanza to accept data only from forwarders that have the matching
  token configured.
* This setting is enabled for all receiving ports.
* This setting is optional.
token = <string>
* Value of token.
[splunktcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, parsed data from a
  forwarder.
* Set <port> to the port on which the forwarder sends the encrypted data.
* Forwarder settings are set in outputs.conf on the forwarder.
* Compression for SSL is enabled by default. On the forwarder you can still
  specify compression with the 'useClientSSLCompression' setting in
  outputs.conf.
* The 'compressed' setting is used for non-SSL connections. However, if you
  still specify 'compressed' for SSL, ensure that the 'compressed' setting
  is the same as on the forwarder, as the splunktcp protocol expects the
  same 'compressed' setting from forwarders.
connection_host = [ip|dns|none]
* For splunktcp, the host or connection_host will be used if the remote
  Splunk instance does not set a host, or if the host is set to
  "<host>::<localhost>".
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the
  system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the Splunk
  system hostname.
* Default: "ip".
compressed = <boolean>
* See the description for this setting in the [splunktcp:<port>] stanza.
enableS2SHeartbeat = <boolean>
* See the description for this setting in the [splunktcp:<port>] stanza.
s2sHeartbeatTimeout = <seconds>
* See the description for this setting in the [splunktcp:<port>] stanza.
listenOnIPv6 = [no|yes|only]
* Select whether this receiver listens on IPv4, IPv6, or both protocols.
* Set to "yes" to listen on both IPv4 and IPv6 protocols.
* Set to "only" to listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza of
  server.conf.
negotiateNewProtocol = <boolean>
* See the description for this setting in the [splunktcp] stanza.
# To specify global ssl settings, that are applicable for all ports, add the
# settings to the SSL stanza.
# Specify any ssl setting that deviates from the global setting here.
# For a detailed description of each ssl setting, refer to the [SSL] stanza.
serverCert = <path>
sslPassword = <password>
requireClientCert = <boolean>
sslVersions = <string>
cipherSuite = <cipher suite string>
ecdhCurves = <comma separated list of ec curves>
dhFile = <path>
allowSslRenegotiation = true|false
sslQuietShutdown = [true|false]
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
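Putting these together, a sketch of an SSL-enabled receiving port that
inherits its certificate settings from a global [SSL] stanza (the certificate
path and password are placeholders):
[splunktcp-ssl:9998]
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = myCertPassword
requireClientCert = false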
[tcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, unparsed data from a
  forwarder or third-party system.
* Set <port> to the port on which the forwarder/third-party system is
  sending unparsed, encrypted data.
* To create multiple SSL inputs, you can add the following attributes to
  each [tcp-ssl:<port>] input stanza. If you do not configure a certificate
  in the port, the certificate information is pulled from the default [SSL]
  stanza:
  * serverCert = <path_to_cert>
  * sslRootCAPath = <path_to_cert> This attribute should only be added
    if you have not configured your sslRootPath in server.conf.
  * sslPassword = <password>
listenOnIPv6 = [no|yes|only]
* Select whether the receiver listens on IPv4, IPv6, or both protocols.
* Set to "yes" to listen on both IPv4 and IPv6 protocols.
* Set to "only" to listen on only the IPv6 protocol.
* If not present, the receiver uses the setting in the [general] stanza of
  server.conf.
[SSL]
* Set the following specifications for receiving Secure Sockets Layer (SSL)
  communication underneath this stanza name.
serverCert = <path>
* The full path to the server certificate Privacy-Enhanced Mail (PEM) format
  file.
* PEM is the most common text-based storage format for SSL certificate
  files.
* No default.
sslPassword = <string>
* The server certificate password, if it exists.
* Initially set to plain-text password.
* Upon first use, the input encrypts and rewrites the password to
  $SPLUNK_HOME/etc/system/local/inputs.conf.
password = <string>
* DEPRECATED.
* Do not use this setting. Use the 'sslPassword' setting instead.
rootCA = <path>
* DEPRECATED.
* Do not use this setting. Use 'server.conf/[sslConfig]/sslRootCAPath'
  instead.
* Used only if 'sslRootCAPath' is not set.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more root CA
  certificates concatenated together.
requireClientCert = <boolean>
* Determines whether a client must present an SSL certificate to
  authenticate.
* Default: false (if using self-signed and third-party certificates)
* Default: true (if using the default certificates, overrides the existing
  "false" setting)
sslVersions = <string>
* A comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
  selects all versions that begin with "tls".
* To remove a version from the list, prefix it with "-".
* SSLv2 is always disabled. Specifying "-ssl2" in the version list has
  no effect.
* When configured in Federal Information Processing Standard (FIPS) mode,
  the "ssl3" version is always disabled, regardless of this configuration.
* The default can vary. See the 'sslVersions' setting in
  $SPLUNK_HOME/etc/system/default/inputs.conf for the current default.
supportSSLV3Only = <boolean>
* DEPRECATED.
* SSLv2 is now always disabled.
* Use the 'sslVersions' setting to set the list of supported SSL versions.
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH) curve to
  use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
  SHORT name.
* The list of valid named curves by their short and long names can be
  obtained by running this CLI command:
  $SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
dhFile = <path>
* Full path to the Diffie-Hellman parameter file.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
dhfile = <path>
* DEPRECATED.
* Use the 'dhFile' setting instead.
allowSslRenegotiation = <boolean>
* Whether or not to let SSL clients renegotiate their connections.
* In the SSL protocol, a client might request renegotiation of the
  connection settings from time to time.
* Setting this to false causes the server to reject all renegotiation
  attempts, which breaks the connection.
* This limits the amount of CPU a single TCP connection can use, but it can
  cause connectivity problems, especially for long-lived connections.
* Default: true.
sslQuietShutdown = <boolean>
* Enables quiet shutdown mode in SSL.
* Default: false.
UDP:
[udp://<remote server>:<port>]
* Similar to the [tcp://] stanza, except that this stanza causes the Splunk
  instance to listen on a UDP port.
* Only one stanza per port number is currently supported.
* Configures the instance to listen on a specific port.
* If you specify <remote server>, the specified port only accepts data
  from that host.
* If <remote server> is empty - [udp://<port>] - the port accepts data sent
  from any host.
* The use of <remote server> is not recommended. Use the 'acceptFrom'
  setting, which supersedes this setting.
* Generates events with source set to udp:portnumber, for example: udp:514.
* If you do not specify a sourcetype, generates events with sourcetype set
  to udp:portnumber.
# Additional settings:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the
  system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the Splunk
  system hostname.
* If the input is configured with a 'sourcetype' that has a transform that
  overrides the 'host' field, e.g. 'sourcetype=syslog', that will take
  precedence over the host specified here.
* Default: "ip"
_rcvbuf = <integer>
* The receive buffer, in bytes, for the UDP port.
* If you set the value to 0 or a negative number, the input ignores the
  value.
* If the default value is too large for an OS, the instance tries to set
  the value to 1572864/2. If that value is also too large, the instance
  retries with 1572864/(2*2). It continues to retry by halving the value
  until it succeeds.
* Default: 1572864.
no_priority_stripping = <boolean>
* Whether or not the input strips <priority> syslog fields from events it
  receives over the syslog input.
* If you set this setting to true, the instance does NOT strip the
  <priority> syslog field from received events.
* NOTE: Do NOT set this setting if you want to strip <priority>.
* Default: false.
no_appending_timestamp = <boolean>
* Whether or not to append a timestamp and host to received events.
* If you set this setting to true, the instance does NOT append a timestamp
  and host to received events.
* NOTE: Do NOT set this setting if you want to append timestamp and host
  to received events.
* Default: false.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information
  on persistent queues and how the 'queueSize' and 'persistentQueueSize'
  settings interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
  be larger than the in-memory queue size (as defined by the 'queueSize'
  setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
  server.conf).
* Default: 0 (no persistent queue).
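For example, a sketch of a UDP syslog listener (port 514 is the syslog
convention; the sourcetype is illustrative):
[udp://514]
connection_host = ip
sourcetype = syslog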
[udp:<port>]
* This input stanza is the same as [udp://<remote server>:<port>], but does
  not have a <remote server> restriction.
* See the documentation for [udp://<remote server>:<port>] to configure the
  following supported settings:
connection_host = [ip|dns|none]
_rcvbuf = <integer>
no_priority_stripping = [true|false]
no_appending_timestamp = [true|false]
queueSize = <integer>[KB|MB|GB]
persistentQueueSize = <integer>[KB|MB|GB|TB]
listenOnIPv6 = <no | yes | only>
acceptFrom = <network_acl> ...
[fifo://<path>]
* This stanza configures the monitoring of a FIFO at the specified path.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information
  on persistent queues and how the 'queueSize' and 'persistentQueueSize'
  settings interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
  be larger than the in-memory queue size (as defined by the 'queueSize'
  setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
  server.conf).
* Default: 0 (no persistent queue).
Scripted Input:
[script://<cmd>]
* Runs <cmd> at a configured interval (see below) and indexes the output
  that <cmd> returns.
* The <cmd> must reside in one of the following directories:
  * $SPLUNK_HOME/etc/system/bin/
  * $SPLUNK_HOME/etc/apps/$YOUR_APP/bin/
  * $SPLUNK_HOME/bin/scripts/
* The path to <cmd> can be an absolute path, make use of an environment
  variable such as $SPLUNK_HOME, or use the special pattern of an initial '.'
  as the first directory to indicate a location inside the current app.
  * The '.' specification must be followed by a platform-specific directory
    separator.
  * For example, on UNIX:
    [script://./bin/my_script.sh]
    Or on Windows:
    [script://.\bin\my_program.exe]
    This '.' pattern is strongly recommended for app developers, and
    necessary for operation in search head pooling environments.
* <cmd> can also be a path to a file that ends with a ".path" suffix. A file
  with this suffix is a special type of pointer file that points to a command
  to be run. Although the pointer file is bound by the same location
  restrictions mentioned above, the command referenced inside it can reside
  anywhere on the file system. The .path file must contain exactly one line:
  the path to the command to run, optionally followed by command-line
  arguments. The file can contain additional empty lines and lines that begin
  with '#'. The input ignores these lines.
passAuth = <username>
* User to run the script as.
* If you provide a username, the instance generates an auth token for that
  user and passes it to the script through stdin.
* No default.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information
  on persistent queues and how the 'queueSize' and 'persistentQueueSize'
  settings interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
  be larger than the in-memory queue size (as defined by the 'queueSize'
  setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
  server.conf).
* Default: 0 (no persistent queue).
index = <string>
* The index where the scripted input sends the data.
* NOTE: The script passes this parameter as a command-line argument to <cmd>
  in the format: -index <index name>.
  If the script does not need the index info, it can ignore this argument.
* If you do not specify an index, the script uses the default index.
send_index_as_argument_for_path = <boolean>
* Whether or not to pass the index as an argument when specified for stanzas
  that begin with 'script://'.
* When this setting is "true", the script passes the argument as
  '-index <index name>'.
* To avoid passing the index as a command line argument, set this to
  "false".
* Default: true.
start_by_shell = <boolean>
* Whether or not to run the specified command through the operating system
  shell or command prompt.
* If you set this setting to "true", the host operating system runs the
  specified command through the OS shell ("/bin/sh -c" on *NIX,
  "cmd.exe /c" on Windows.)
* If you set the setting to "false", the input runs the program directly
  without attempting to expand shell metacharacters.
* You might want to explicitly set the setting to "false" for scripts
  that you know do not need UNIX shell metacharacter expansion. This is
  a Splunk best practice.
* Default: true (on *NIX hosts)
* Default: false (on Windows hosts).
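A sketch of a scripted input using the recommended '.' pattern; the script
name is illustrative, and 'interval' (how often, in seconds, to run the
command) is part of the full scripted-input spec though not shown in this
excerpt:
[script://./bin/my_script.sh]
interval = 60
sourcetype = my_script_output
index = main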
#
# The file system change monitor has been deprecated as of Splunk Enterprise
# version 5.0 and might be removed in a future version of the product.
#
# You cannot simultaneously monitor a directory with both the 'fschange'
# and 'monitor' stanza types.
[fschange:<path>]
* Monitors changes (such as additions, updates, and deletions) to this
  directory and any of its sub-directories.
* <path> is the direct path. Do not preface it with '//' like with other
  inputs.
* Sends an event for every change.
# Additional settings:
# NOTE: The 'fschange' stanza type does not use the same settings as
# other input types. It uses only the following settings:
index = <string>
* The index where the input sends the data.
* Default: _audit (if you do not set 'signedaudit' or set 'signedaudit'
  to "false")
* Default: the default index (in all other cases)
signedaudit = <boolean>
* Whether or not to send cryptographically signed add/update/delete events.
* If this setting is "true", the input does the following to events that it
  generates:
  * Puts the events in the _audit index.
  * Sets the event sourcetype to 'audittrail'.
* If this setting is "false", the input:
  * Places events in the default index.
  * Sets the sourcetype to whatever you specify (or "fs_notification"
    by default).
* You must set 'signedaudit' to "false" if you want to set the index for
  fschange events.
* You must also enable auditing in audit.conf.
* Default: false.
recurse = <boolean>
* Whether or not the fschange input should look through all sub-directories
  for changes to files in a directory.
* If this setting is "true", the input recurses through sub-directories
  within the directory specified in [fschange].
* Default: true.
followLinks = <boolean>
* Whether or not the fschange input should follow any symbolic links it
  encounters.
* If you set this setting to "true", the input follows symbolic links.
* CAUTION: Do not set this setting to "true" unless you can confirm that
  doing so will not create a file system loop (For example, in Directory A,
  symbolic link B points back to Directory A.)
* Default: false.
pollPeriod = <integer>
* How often, in seconds, to check a directory for changes.
* Default: 3600 (1 hour).
hashMaxSize = <integer>
* Tells the fschange input to calculate a SHA256 hash for every file that
  is this size or smaller, in bytes.
* The input uses this hash as an additional method for detecting changes to
  the file/directory.
* Default: -1 (disabled).
fullEvent = <boolean>
* Whether or not to send the full event if the input detects an add or
  update change.
* Set to true to send the full event if an add or update change is detected.
* Further qualified by the 'sendEventMaxSize' setting.
* Default: false.
sendEventMaxSize = <integer>
* The maximum size, in bytes, that an fschange event can be for the input to
  send the full event to be indexed.
* Limits the size of event data that the fschange input sends.
* This limits the size of indexed file data.
* Default: -1 (unlimited).
sourcetype = <string>
* Sets the source type for events from this input.
* The input automatically prepends "sourcetype=" to <string>.
* Default: "audittrail" (if you set the 'signedaudit' setting to "true".)
* Default: "fs_notification" (if you set the 'signedaudit' setting to
  "false".)
host = <string>
* Sets the host name for events from this input.
* Defaults to whatever host sent the event.
filesPerDelay = <integer>
* The number of files that the fschange input processes between processing
  delays, as specified by the 'delayInMills' setting.
* After a delay of 'delayInMills' milliseconds, the fschange input processes
  'filesPerDelay' files, then waits 'delayInMills' milliseconds again before
  repeating this process.
* This setting helps throttle file system monitoring so it consumes less
  CPU.
* Default: 10.
delayInMills = <integer>
* The delay, in milliseconds, that the fschange input waits between
  processing 'filesPerDelay' files.
* After a delay of 'delayInMills' milliseconds, the fschange input processes
  'filesPerDelay' files, then waits 'delayInMills' milliseconds again before
  repeating this process.
* This setting helps throttle file system monitoring so it consumes less
  CPU.
* Default: 100.
[filter:<filtertype>:<filtername>]
* Defines a filter of type <filtertype> and names it <filtername>.
* <filtertype>:
  * Filter types are either 'blacklist' or 'whitelist'.
  * A whitelist filter processes all file names that match the regular
    expression list that you define within the stanza.
  * A blacklist filter skips all file names that match the regular
    expression list.
* <filtername>
  * The fschange input uses filter names that you specify with the 'filters'
    setting for a given fschange stanza.
  * You can specify multiple filters by separating them with commas.
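As an illustration, a sketch pairing an fschange stanza with a whitelist
filter; the paths and patterns are illustrative, and the regex<N> key form
is assumed here as the usual fschange filter convention:
[filter:whitelist:configs]
regex1 = .*\.conf
regex2 = .*\.cfg
[fschange:/etc]
filters = configs
pollPeriod = 600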
[http]
port = <positive integer>
* The event collector data endpoint server port.
* Default: 8088.
disabled = [0|1]
* Whether or not the event collector input is active.
* Set this setting to "1" to disable the input, and "0" to enable it.
* Default: 1 (disabled).
outputgroup = <string>
* The name of the output group that the event collector forwards data to.
* Default: empty string.
useDeploymentServer = [0|1]
* Whether or not the HTTP event collector input should write its
  configuration to a deployment server repository.
* When you enable this setting, the input writes its configuration to the
  directory that you specify with the 'repositoryLocation' setting in
  serverclass.conf.
* You must copy the full contents of the splunk_httpinput app directory to
  this directory for the configuration to work.
* When enabled, only the tokens defined in the splunk_httpinput app in this
  repository will be viewable and editable through the API and Splunk Web.
* When disabled, the input writes its configuration to $SPLUNK_HOME/etc/apps
  by default.
* Default: 0 (disabled).
index = <string>
* The default index to use.
* Default: the "default" index.
sourcetype = <string>
* The default source type for the events that the input generates.
* If you do not specify a sourcetype, the input does not set a sourcetype
  for events it generates.
enableSSL = [0|1]
* Whether or not the HTTP Event Collector uses SSL.
* HEC shares SSL settings with the Splunk management server and cannot have
  SSL enabled when the Splunk management server has SSL disabled.
* Default: 1 (enabled).
dedicatedIoThreads = <number>
* The number of dedicated input/output threads in the event collector input.
* Default: 0 (The input uses a single thread).
replyHeader.<name> = <string>
* Adds a static header to all HTTP responses that this server generates.
* For example, "replyHeader.My-Header = value" causes the
response header "My-Header: value" to be included in the reply to
every HTTP request made to the event collector endpoint server.
* No default.
maxSockets = <integer>
* The number of HTTP connections that the HTTP event collector input accepts
  simultaneously.
* Set this setting to constrain resource usage.
* If you set this setting to 0, the input automatically sets it to one third
  of the maximum allowable open files on the host.
* If this value is less than 50, the input sets it to 50. If this value is
  greater than 400000, the input sets it to 400000.
* If set to a negative value, the input does not enforce a limit on
  connections.
* Default: 0.
maxThreads = <integer>
* The number of threads that can be used by active HTTP transactions.
* Set this to constrain resource usage.
* If you set this setting to 0, the input automatically sets the limit to
  one third of the maximum allowable threads on the host.
* If this value is less than 20, the input sets it to 20. If this value is
  greater than 150000, the input sets it to 150000.
* If the 'maxSockets' setting has a positive value and 'maxThreads' is
  greater than 'maxSockets', then the input sets 'maxThreads' to be equal
  to 'maxSockets'.
* If set to a negative value, the input does not enforce a limit on threads.
* Default: 0.
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the HTTP Event Collector input lets a
  keep-alive connection remain idle before forcibly disconnecting it.
* If this value is less than 7200, the input sets it to 7200.
* Default: 7200.
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the HTTP Event Collector lets a keep-alive
  connection remain idle while in a busy state before forcibly
  disconnecting it.
* CAUTION: Setting this to a value that is too large can result in file
  descriptor exhaustion due to idling connections.
* If this value is less than 12, the input sets it to 12.
* Default: 12.
serverCert = <path>
* The full path to the server certificate PEM format file.
* The same file may also contain a private key.
* The Splunk software automatically generates certificates when it first
  starts.
* You may replace the auto-generated certificate with your own certificate.
* Default: $SPLUNK_HOME/etc/auth/server.pem.
sslKeysfile = <filename>
* DEPRECATED.
* Use the 'serverCert' setting instead.
* The file that contains the SSL keys. Splunk software looks for this file
  in the directory specified by 'caPath'.
* Default: server.pem.
sslPassword = <string>
* The server certificate password.
* Initially set to a plain-text password.
* Upon first use, Splunk software encrypts and rewrites the password.
* Default: "password".
sslKeysfilePassword = <string>
* DEPRECATED.
* Use the 'sslPassword' setting instead.
caCertFile = <string>
* DEPRECATED.
* Use the 'server.conf:[sslConfig]/sslRootCAPath' setting instead.
* Used only if you do not set the 'sslRootCAPath' setting.
* Specifies the file name (relative to 'caPath') of the CA (Certificate
  Authority) certificate PEM format file that contains one or more
  certificates concatenated together.
* Default: cacert.pem.
caPath = <string>
* DEPRECATED.
* Use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not
  absolute paths, then they will be relative to this path.
* Default: $SPLUNK_HOME/etc/auth.
cipherSuite = <string>
* The cipher string to use for the HTTP Event Collector input.
* Use this setting to ensure that the server does not accept connections
  using weak encryption protocols.
* If you set this setting, the input uses the specified cipher string for
  the HTTP server.
* If you do not set the setting, the input uses the default cipher string
  that OpenSSL provides.
listenOnIPv6 = [no|yes|only]
* Whether or not this input listens on IPv4, IPv6, or both.
* Set to "no" to make the input listen only on the IPv4 protocol.
* Set to "yes" to make the input listen on both IPv4 and IPv6 protocols.
* Set to "only" to make the input listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza of
  server.conf.
requireClientCert = <boolean>
* Requires that any client connecting to the HEC port has a certificate that
  can be validated by the certificate authority specified in the
  'caCertFile' setting.
* Default: false.
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the ECDH curve to use for ECDH key negotiation.
* Splunk software only supports named curves that have been specified by
  their SHORT names.
* The list of valid named curves by their short or long names can be
  obtained by executing this command:
  $SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
crossOriginSharingPolicy = <origin_acl> ...
* A list of the HTTP Origins for which to return Access-Control-Allow-*
  Cross-Origin Resource Sharing (CORS) headers.
* These headers tell browsers that web applications at those sites can be
  trusted to make requests to the REST interface.
* The origin is passed as a URL without a path component (for example
  "https://app.example.com:8000").
* This setting can take a list of acceptable origins, separated by spaces
  and/or commas.
* Each origin can also contain wildcards for any part. Examples:
  * *://app.example.com:* (either HTTP or HTTPS on any port)
  * https://*.example.com (any host under example.com, including example.com
    itself).
* An address can be prefixed with a '!' to negate the match, with the first
  matching origin taking precedence. Example:
  * "!*://evil.example.com:* *://*.example.com:*" to avoid matching one host
    in a domain while matching the rest.
* "*" matches all origins.
* Default: empty string.
forceHttp10 = [auto|never|always]
* Whether or not the REST HTTP server forces clients that connect to it to
  use the HTTP 1.0 specification for web communications.
* When set to "always", the REST HTTP server does not use some HTTP 1.1
  features such as persistent connections or chunked transfer encoding.
* When set to "auto", it does this only if the client did not send a
  User-Agent header, or if the user agent is known to have bugs in its
  support of HTTP/1.1.
* When set to "never" it always allows HTTP 1.1, even to clients it
  suspects might be buggy.
* Default: "auto".
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
* Subject Alternate Names are effectively extended descriptive fields in SSL
  certs beyond the commonName. A common practice for HTTPS certs is to use
  these values to store additional valid hostnames or domains where the cert
  should be considered valid.
* Accepts a comma-separated list of Subject Alternate Names to consider
  valid.
* Items in this list are never validated against the SSL Common Name.
* This feature does not work with the deployment server and client
  communication over SSL.
* This setting is optional.
* Default: empty string (no alternate name checking.)
sendStrictTransportSecurityHeader = <boolean>
* Whether or not to force inbound connections to always use SSL with the
  "Strict-Transport-Security" header.
* If set to "true", the REST interface sends a "Strict-Transport-Security"
  header with all responses to requests made over SSL.
* This can help prevent a client being tricked later by a Man-In-The-Middle
  attack to accept a non-SSL request. However, this requires a commitment
  that no non-SSL web hosts will ever be run on this hostname on any port.
  For example, if Splunk Web is in default non-SSL mode this can break the
  ability of the browser to connect to it. Enable with caution.
* Default: false.
allowSslCompression = <boolean>
* Whether or not to allow data compression over SSL.
* If set to "true", the server will allow clients to negotiate SSL-layer
  data compression.
* Default: true.
allowSslRenegotiation = <boolean>
* Whether or not to let SSL clients renegotiate their connections.
* In the SSL protocol, a client may request renegotiation of the
connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, which breaks the connection.
* This limits the amount of CPU a single TCP connection can use, but it
can
cause connectivity problems, especially for long-lived connections.
* Default: true.
ackIdleCleanup = <boolean>
* Whether or not to remove ACK channels that have been idle after a
period
of time, as defined by the 'maxIdleTime' setting.
* If set to "true", the server removes the ACK channels that are idle
for 'maxIdleTime' seconds.
* Default: false.
maxIdleTime = <integer>
* The maximum amount of time, in seconds, that ACK channels can be idle
before they are removed.
* If 'ackIdleCleanup' is "true", the system removes ACK channels that
have
been idle for 'maxIdleTime' seconds.
* Default: 600 (10 minutes.)
channel_cookie = <string>
* The name of the cookie to use when sending data with a specified
channel ID.
* The value of the cookie will be the channel sent. For example, if you
have
set 'channel_cookie=foo' and sent a request with channel ID set to
'bar',
then you will have a cookie in the response with the value 'foo=bar'.
* If no channel ID is present in the request, then no cookie will be
returned.
* This setting is to be used for load balancers (for example, AWS ELB)
that can
only provide sticky sessions on cookie values and not general header
values.
* If no value is set (the default), then no cookie will be returned.
* Default: empty string (no cookie).
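* Illustrative example (hypothetical cookie name): with the setting below,
  a request sent with channel ID "bar" receives a "hecchannel=bar" cookie
  in the response, which cookie-based load balancers can use for sticky
  sessions:
    channel_cookie = hecchannel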
[http://<name>]
token = <string>
* The value of the HEC token.
* HEC uses this token to authenticate inbound connections. Your
application
or web client must present this token when attempting to connect to
HEC.
* No default.
disabled = [0|1]
* Whether or not this token is active.
* Defaults to 0 (enabled).
description = <string>
* A human-readable description of this token.
* Default: empty string.
indexes = <string>
* The indexes that events for this token can go to.
* If you do not specify this value, the index list is empty, and any
index
can be used.
* Default: Not set.
index = <string>
* The default index to use for this token.
* Default: the default index.
sourcetype = <string>
* The default sourcetype to use if it is not specified in an event.
* Default: empty string.
outputgroup = <string>
* The name of the forwarding output group to send data to.
* Default: empty string.
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For
  information on persistent queues and how the 'queueSize' and
  'persistentQueueSize' settings interact, search the online documentation
  for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize'
must
be larger than either the in-memory queue size (as defined by the
'queueSize' setting in inputs.conf or 'maxSize' settings in [queue]
stanzas
in server.conf).
* Default: 0 (no persistent queue).
connection_host = [ip|dns|proxied_ip|none]
* Specifies the host if an event doesn't have a host set.
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the
system
sending the data.
* "proxied_ip" checks whether an X-Forwarded-For header was sent
(presumably by a proxy server) and if so, sets the host to that value.
Otherwise, the IP address of the system sending the data is used.
* "none" leaves the host as specified in the HTTP header.
* No default.
useACK = <boolean>
* When set to "true", acknowledgment (ACK) is enabled. Events in a
request will
be tracked until they are indexed. An events status (indexed or not)
can be
queried from the ACK endpoint with the ID for the request.
* When set to false, acknowledgment is not enabled.
* This setting can be set at the stanza level.
* Default: false.
allowQueryStringAuth = <boolean>
* Enables or disables sending authorization tokens with a query string.
* This is a token level configuration. It may only be set for
a particular token.
* To use this feature, set to "true" and configure the client
application to
include the token in the query string portion of the URL they use to
send data to HEC in the following format:
"https://<URL>?<your=query-string>&token=<your-token>" or
"https://<URL>?token=<your-token>" if the token is the first element
in the
query string.
* If a token is sent in both the query string and an HTTP header, the
token in
the query string takes precedence, even if this feature is disabled.
In
other words, if a token is present in the query string, any token in
the
header for that request will not be used.
* NOTE: Query strings may be observed in transit and/or logged in
cleartext.
There is no confidentiality protection for the transmitted tokens.
* Before using this in production, consult security personnel in
your
organization to understand and plan to mitigate the risks.
* At a minimum, always use HTTPS when you enable this feature.
Check your
client application, proxy, and logging configurations to confirm
that
the token is not logged in clear text.
* Give minimal access permissions to the token in HEC and restrict
the
use of the token only to trusted client applications.
* Default: false.
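* Illustrative example: a complete HEC token stanza. The stanza name,
  token value, and index names below are placeholders, not real values:
    [http://my_app_token]
    token = 11111111-2222-3333-4444-555555555555
    indexes = main, web
    index = main
    sourcetype = httpevent
    useACK = true
    disabled = 0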
WINDOWS INPUTS:
#*******
# The following Windows input specifications are for parsing on
non-Windows
# platforms.
#*******
Performance Monitor
[perfmon://<name>]
object = <string>
* A valid Performance Monitor object as defined within Performance
Monitor (for example, "Process," "Server," "PhysicalDisk.")
* You can specify a single valid Performance Monitor object or use a
regular expression (regex) to specify multiple objects.
* This setting is required, and the input will not run if the setting is
not present.
* No default.
interval = <integer>
* How often, in seconds, to poll for new data.
* This setting is required, and the input will not run if the setting is
not present.
* The recommended setting depends on the Performance Monitor object,
counter(s), and instance(s) that you define in the input, and how much
performance data you need.
* Objects with numerous instantaneous or per-second counters, such
as "Memory", "Processor", and "PhysicalDisk" should have shorter
interval times specified (anywhere from 1-3 seconds).
* Less volatile counters such as "Terminal Services", "Paging File",
and "Print Queue" can have longer intervals configured.
* Default: 300.
mode = [single|multikv]
* Specifies how the performance monitor input generates events.
* Set to "single" to print each event individually.
* Set to "multikv" to print events in multikv (formatted multiple
key-value pair) format.
* Default: "single".
stats = <average;count;dev;min;max>
* Reports statistics for high-frequency performance
sampling.
* This is an advanced setting.
* Acceptable values are: average, count, dev, min, max.
* You can specify multiple values by separating them with semicolons.
* If not specified, the input does not produce high-frequency sampling
statistics.
* Default: not set (disabled).
disabled = [0|1]
* Specifies whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled).
showZeroValue = [0|1]
* Specifies whether or not zero-value event data should be collected.
* Set to 1 to capture zero-value event data, and 0 to ignore such data.
* Default: 0 (ignore zero-value event data).
useEnglishOnly = <boolean>
* Controls which Windows Performance Monitor API the input uses.
* If set to "true", the input uses PdhAddEnglishCounter() to add the
counter string. This ensures that counters display in English
regardless of the Windows machine locale.
* If set to "false", the input uses PdhAddCounter() to add the counter
string.
* NOTE: if you set this setting to true, the 'object' setting does not
accept a regular expression as a value on machines that have a
non-English
locale.
* Default: false.
formatString = <string>
* Controls the print format for double-precision statistic counters.
* Do not use quotes when specifying this string.
* Default: "%.20g" (without quotes).
###
# Direct Access File Monitor (does not use file handles)
# For Windows systems only.
###
[MonitorNoHandle://<path>]
disabled = [0|1]
* Whether or not the input is enabled.
* Default: 0 (enabled).
index = <string>
* Specifies the index that this input should send the data to.
* This setting is optional.
* Default: the default index.
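* Illustrative example (hypothetical path): monitor a log file that
  another process keeps open and locked:
    [MonitorNoHandle://C:\Windows\System32\LogFiles\myservice.log]
    index = main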
[WinEventLog://<name>]
start_from = <string>
* How the input should chronologically read the Event Log channels.
* If you set this setting to "oldest", the input reads Windows event
logs
from oldest to newest.
* If you set this setting to "newest" the input reads Windows event
logs
in reverse, from newest to oldest. Once the input consumes the backlog
of
events, it stops.
* If you set this setting to "newest", and at the same time set the
"current_only" setting to 0, the combination can result in the input
indexing duplicate events.
* Do not set this setting to "newest" and at the same time set the
"current_only" setting to 1. This results in the input not collecting
any events because you instructed it to read existing events from
oldest
to newest and read only incoming events concurrently (A logically
impossible combination.)
* Default: "oldest".
use_old_eventlog_api = <boolean>
* Whether or not to read Event Log events with the Event Logging API.
* This is an advanced setting. Contact Splunk Support before you change
it.
* If set to "true", the input uses the Event Logging API (instead of
the
Windows Event Log API) to read from the Event Log on Windows Server
2008,
Windows Vista, and later installations.
* Default: false (Use the API that is specific to the OS.)
use_threads = <integer>
* Specifies the number of threads, in addition to the default writer
thread,
that can be created to filter events with the blacklist/whitelist
regular expression.
* This is an advanced setting. Contact Splunk Support before you change
it.
* The maximum number of threads is 15.
* Default: 0
thread_wait_time_msec = <integer>
* The interval, in milliseconds, between attempts to re-read Event Log
files
when a read error occurs.
* This is an advanced setting. Contact Splunk Support before you change
it.
* Default: 5000
suppress_checkpoint = <boolean>
* Whether or not the Event Log input strictly follows the
  'checkpointInterval' setting when it saves a checkpoint.
* This is an advanced setting. Contact Splunk Support before you change
it.
* By default, the Event Log input saves a checkpoint from between zero
and 'checkpointInterval' seconds, depending on incoming event volume.
If you set this setting to "true", that does not happen.
* Default: false
suppress_sourcename = <boolean>
* Whether or not to exclude the 'sourcename' field from events.
* This is an advanced setting. Contact Splunk Support before you change
it.
* When set to true, the input excludes the 'sourcename' field from
events
and thruput performance (the number of events processed per second)
improves.
* Default: false
suppress_keywords = <boolean>
* Whether or not to exclude the 'keywords' field from events.
* This is an advanced setting. Contact Splunk Support before you change
it.
* When set to true, the input excludes the 'keywords' field from events
and
thruput performance (the number of events processed per second)
improves.
* Default: false
suppress_type = <boolean>
* Whether or not to exclude the 'type' field from events.
* This is an advanced setting. Contact Splunk Support before you change
it.
* When set to true, the input excludes the 'type' field from events and
thruput performance (the number of events processed per second)
improves.
* Default: false
suppress_task = <boolean>
* Whether or not to exclude the 'task' field from events.
* This is an advanced setting. Contact Splunk Support before you change
it.
* When set to true, the input excludes the 'task' field from events and
thruput performance (the number of events processed per second)
improves.
* Default: false
suppress_opcode = <boolean>
* Whether or not to exclude the 'opcode' field from events.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* When set to true, the input excludes the 'opcode' field from events
  and thruput performance (the number of events processed per second)
  improves.
* Default: false
current_only = [0|1]
* Whether or not to acquire only events that arrive while the instance
is
running.
* If you set this setting to 1, the input only acquires events that
arrive
while the instance runs and the input is enabled. The input does not
read
data which was stored in the Windows Event Log while the instance was
not
running. This means that there will be gaps in the data if you restart
  the instance or it experiences downtime.
* If you set the setting to 0, the input first gets all existing events
already stored in the log that have higher event IDs (have arrived
more
recently) than the most recent events acquired. The input then
monitors
events that arrive in real time.
* If you set this setting to 0, and at the same time set the
'start_from' setting to "newest", the combination can result in the
indexing of duplicate events.
* Do not set this setting to 1 and at the same time set the
'start_from' setting to "newest". This results in the input not
collecting
any events because you instructed it to read existing events from
oldest
to newest and read only incoming events concurrently (A logically
impossible combination.)
* Default: 0 (false, gathering stored events first before monitoring
live events.)
batch_size = <integer>
* How many Windows Event Log items to read per request.
* If troubleshooting identifies that the Event Log input is a
bottleneck in
acquiring data, increasing this value can help.
* NOTE: Splunk Support has seen cases where large values can result
in a
stall in the Event Log subsystem. If you increase this value
significantly, monitor closely for trouble.
* In local and customer acceptance testing, a value of 10 was acceptable
for both throughput and reliability.
* Default: 10.
checkpointInterval = <integer>
* How often, in seconds, that the Windows Event Log input saves a
checkpoint.
* Checkpoints store the eventID of acquired events. This lets the input
continue monitoring at the correct event after a shutdown or outage.
* Default: 0.
disabled = [0|1]
* Whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled).
evt_resolve_ad_obj = [0|1]
* How the input should interact with Active Directory while indexing
Windows
Event Log events.
* If you set this setting to 1, the input resolves the Active
Directory Security IDentifier (SID) objects to their canonical names
for
a specific Windows Event Log channel.
* If you enable the setting, the rate at which the input reads events
on high-traffic Event Log channels can decrease. Latency can also
increase
during event acquisition. This is due to the overhead involved in
performing
AD translations.
* When you set this setting to 1, you can optionally specify the domain
controller name or dns name of the domain to bind to with the
'evt_dc_name'
setting. The input connects to that domain controller to resolve the
AD
objects.
* If you set this setting to 0, the input does not attempt any
resolution.
* Default: 0 (disabled) for all channels.
evt_dc_name = <string>
* Which Active Directory domain controller to bind to for AD object
resolution.
* If you prefix a dollar sign to a value (for example,
$my_domain_controller),
the input interprets the value as an environment variable. If the
environment variable has not been defined on the host, it is the same
as if the value is blank.
* This setting is optional.
* This setting can be set to the NetBIOS name of the domain controller
or the fully-qualified DNS name of the domain controller. Either name
type can, optionally, be preceded by two backslash characters. The
following
examples represent correctly formatted domain controller names:
* "FTW-DC-01"
* "\\FTW-DC-01"
* "FTW-DC-01.splunk.com"
* "\\FTW-DC-01.splunk.com"
* $my_domain_controller
evt_dns_name = <string>
* The fully-qualified DNS name of the domain that the input should bind
to for
AD object resolution.
* This setting is optional.
evt_resolve_ad_ds = [auto|PDC]
* How the input should choose the domain controller to bind for
AD resolution.
* This setting is optional.
* If set to PDC, the input only contacts the primary domain controller
to resolve AD objects.
* If set to auto, the input lets Windows chose the best domain
controller.
* If you set the 'evt_dc_name' setting, the input ignores this setting.
* Defaults to 'auto' (let Windows determine the domain controller to
use.)
evt_ad_cache_disabled = [0|1]
* Enables or disables the AD object cache.
* Default: 0 (enabled).
evt_ad_cache_exp = <integer>
* The expiration time, in seconds, for AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is
31536000.
* Default: 3600 (1 hour).
evt_ad_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is
31536000.
* Default: 10.
evt_ad_cache_max_entries = <integer>
* The maximum number of AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is
40000.
* Default: 1000.
evt_sid_cache_disabled = [0|1]
* Enables or disables account Security IDentifier (SID) cache.
* This setting is global. It affects all Windows Event Log stanzas.
* Default: 0.
evt_sid_cache_exp = <integer>
* The expiration time, in seconds, for account SID cache entries.
* This setting is global. It affects all Windows Event Log stanzas.
* The minimum allowed value is 10 and the maximum allowed value is
31536000.
* Default: 3600.
index = <string>
* Specifies the index that this input should send the data to.
* This setting is optional.
* Default: The default index.
######
# Event Log filtering
#
# Filtering at the input layer is desirable to reduce the total
# processing load in network transfer and computation on the Splunk
# nodes that acquire and process Event Log data.
######
blacklist6 = <list of eventIDs> | key=regex [key=regex]
blacklist7 = <list of eventIDs> | key=regex [key=regex]
blacklist8 = <list of eventIDs> | key=regex [key=regex]
blacklist9 = <list of eventIDs> | key=regex [key=regex]
* key=regex format:
  * A whitespace-separated list of Event Log components to match, and
    regular expressions to match against them.
  * There can be one match expression or multiple expressions per line.
  * The key must belong to the set of valid keys provided below.
  * The regex consists of a leading delimiter, the regex expression, and
    a trailing delimiter. Examples: %regex%, *regex*, "regex"
  * When multiple match expressions are present, they are treated as a
    logical AND. In other words, all expressions must match for the
    line to apply to the event.
* If the value represented by the key does not exist, it is not
considered
a match, regardless of the regex.
* Example:
whitelist = EventCode=%^200$% User=%jrodman%
Include events only if they have EventCode 200 and relate to User
jrodman
* The following keys are equivalent to the fields that appear in the
text of
the acquired events:
* Category, CategoryString, ComputerName, EventCode, EventType,
Keywords,
LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName,
TaskCategory, Type, User
* There are two special keys that do not appear literally in the event.
* $TimeGenerated: The time that the computer generated the event
* $Timestamp: The time that the event was received and recorded by the
Event Log service.
* The 'EventType' key is only available on Windows Server 2003 /
Windows XP and earlier.
* The 'Type' key is only available on Windows Server 2008 /
Windows Vista and later.
* For a detailed definition of these keys, see the
"Monitor Windows Event Log Data" topic in the online documentation.
suppress_text = [0|1]
* Whether or not to include the description of the event text for a
given Event Log event.
* This setting is optional.
* Set this setting to 1 to suppress the inclusion of the event
text description.
* Set this value to 0 to include the event text description.
* Default: 0.
renderXml = <boolean>
* Whether or not the input returns the event data in XML (eXtensible
Markup
Language) format or in plain text.
* Set this to "true" to render events in XML.
* Set this to "false" to output events in plain text.
* If you set this setting to "true", you should also set the
'suppress_text',
'suppress_sourcename', 'suppress_keywords', 'suppress_task', and
'suppress_opcode' settings to "true" to improve thruput performance.
* Default: false.
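* Illustrative example (hypothetical event IDs and regexes): monitor the
  Security channel, resolve AD objects, and filter out noisy events:
    [WinEventLog://Security]
    evt_resolve_ad_obj = 1
    checkpointInterval = 5
    blacklist1 = 4648,4672
    blacklist2 = EventCode=%^4624$% Message=%Logon Type:\s+5%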
Active Directory Monitor
[admon://<name>]
targetDc = <string>
* The fully qualified domain name of a valid, network-accessible
Active Directory domain controller (DC).
* This setting is case sensitive. Do not use 'targetdc' or 'targetDC',
but rather 'targetDc'.
* Default: the DC that the local host used to connect to AD. The
input binds to its root Distinguished Name (DN).
startingNode = <string>
* Where in the Active Directory directory tree to start monitoring.
* The user that you configure the Splunk software to run as at
installation determines where the input starts monitoring.
* Default: the root of the directory tree.
monitorSubtree = [0|1]
* Whether or not to monitor the subtree(s) of a given Active
Directory tree path.
* Set this to 1 to monitor subtrees of a given directory tree
path and 0 to monitor only the path itself.
* Default: 1 (monitor subtrees of a given directory tree path).
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input and 0 to enable it.
* Default: 0 (enabled.)
index = <string>
* The index to store incoming data into for this input.
* This setting is optional.
* Default: the default index.
printSchema = [0|1]
* Whether or not to print the Active Directory schema.
* Set this to 1 to print the schema and 0 to not print
the schema.
* Default: 1 (print the Active Directory schema).
baseline = [0|1]
* Whether or not to query baseline objects.
* Baseline objects are objects which currently reside in Active
Directory.
* Baseline objects also include previously deleted objects.
* Set this to 1 to query baseline objects, and 0 to not query
baseline objects.
* Default: 0 (do not query baseline objects).
[WinRegMon://<name>]
proc = <string>
* The processes this input should monitor for Registry access.
* If set, matches against the process name which performed the Registry
access.
* The input includes events from processes that match the regular
expression
that you specify here.
* The input filters out events for processes that do not match the
regular expression.
* No default.
hive = <string>
* The Registry hive(s) that this input should monitor for Registry
access.
* If set, matches against the Registry key that was accessed.
* The input includes events from Registry hives that match the
regular expression that you specify here.
* The input filters out events for Registry hives that do not match the
regular expression.
* No default.
type = <string>
* A regular expression that specifies the type(s) of Registry event(s)
that you want the input to monitor.
* No default.
baseline = [0|1]
* Whether or not the input should get a baseline of Registry events
when it starts.
* If you set this to 1, the input captures a baseline for
the specified hive when it starts for the first time. It then
monitors live events.
* Default: 0 (do not capture a baseline for the specified hive
first before monitoring live events).
baseline_interval = <integer>
* Selects how much downtime in continuous registry monitoring should
trigger
a new baseline for the monitored hive and/or key.
* In detail:
* Sets the minimum time interval, in seconds, between baselines.
* At startup, a WinRegMon input will not generate a baseline if less time
    has passed since the last checkpoint than 'baseline_interval' specifies.
* In normal operation, checkpoints are updated frequently as data is
acquired, so this will cause baselines to occur only when monitoring
was
not operating for a period of time.
  * If 'baseline' is set to 0 (disabled), this setting has no effect.
* Default: 0 (always baseline on startup, if baseline is 1)
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
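* Illustrative example (hypothetical regular expressions): monitor writes
  to the Run key by any process, capturing a baseline on first start:
    [WinRegMon://RunKey]
    hive = \\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
    proc = .*
    type = set|create|delete|rename
    baseline = 1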
Windows Host Monitoring
[WinHostMon://<name>]
interval = <integer>
* The interval, in seconds, between when the input runs to gather
Windows host information and generate events.
* See 'interval' in the Scripted input section for more information.
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
[WinPrintMon://<name>]
* Each WinPrintMon:// stanza represents a WinPrintMon monitoring input.
  The value of "<name>" matches what was specified in Splunk Web.
* NOTE: The WinPrintMon input is for local Windows systems only.
* The "<name>" component of the stanza name is used as the source field
  on generated events, unless an explicit source setting is added to the
  stanza. It does not affect what data is collected (see the type setting
  for that).
baseline = [0|1]
* Whether or not to capture a baseline of print objects when the
input starts for the first time.
* If you set this setting to 1, the input captures a baseline of
the current print objects when the input starts for the first time.
* Default: 0 (do not capture a baseline.)
disabled = [0|1]
* Whether or not the input is enabled.
* Set to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
[WinNetMon://<name>]
remoteAddress = <regular expression>
* If set, matches against the remote address of the network event.
* The input includes only events whose remote addresses
  match the regular expression.
* Default: Not set (including all remote address events).
addressFamily = ipv4;ipv6
* Determines the events to include by network address family.
* Setting ipv4 alone will include only IPv4 packets, while ipv6 alone
will include only IPv6 packets.
* To specify both families, separate them with a semicolon.
For example: ipv4;ipv6
* Default: Not set (including events with both address families).
packetType = connect;accept;transport
* Determines the events to include by network packet type.
* To specify multiple packet types, separate them with a semicolon.
For example: connect;transport
* Default: Not set (including events with any packet type).
direction = inbound;outbound
* Determines the events to include by network transport direction.
* To specify multiple directions, separate them with a semicolon.
For example: inbound;outbound
* Default: Not set (including events with any direction).
protocol = tcp;udp
* Determines the events to include by network protocol.
* To specify multiple protocols, separate them with a semicolon.
For example: tcp;udp
* For more information about protocols, see
  http://www.ietf.org/rfc/rfc1700.txt
* Default: Not set (including events with all protocols).
readInterval = <integer>
* How often, in milliseconds, that the input should read the network
kernel driver for events.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Set this to adjust the frequency of calls into the network kernel
driver.
* Choosing lower values (higher frequencies) can reduce network
performance, while higher numbers (lower frequencies) can cause event
loss.
* The minimum allowed value is 10 and the maximum allowed value is 1000.
* Default: Not set, handled as 100 (ms).
driverBufferSize = <integer>
* The maximum number of packets that the network kernel driver retains
for retrieval by the input.
* Set to adjust the maximum number of network packets retained in
the network driver buffer.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Configuring this setting to lower values can result in event loss,
while
higher values can increase the size of non-paged memory on the host.
* The minimum allowed value is 128 and the maximum allowed value is
32768.
* Default: Not set, handled as 32768 (packets).
userBufferSize = <integer>
* The maximum size, in megabytes, of the user mode event buffer.
* Controls the number of packets cached in user mode.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Configuring this setting to lower values can result in event loss,
while
higher values can increase the amount of memory that the network
monitor uses.
* The minimum allowed value is 20 and the maximum allowed value is 500.
* Default: Not set, handled as 20MB.
mode = single|multikv
* Specifies how the network monitor input generates events.
* Set to "single" to generate one event per packet.
* Set to "multikv" to generate combined events of many packets in
multikv format (many packets described in a single table as one
event).
* Default: "single".
multikvMaxEventCount = <integer>
* The maximum number of packets to combine in multikv format when you
set
the 'mode' setting to "multikv".
* Has no effect when 'mode' is set to "single".
* Advanced option.
* The minimum allowed value is 10 and the maximum allowed value is 500.
* Default: 100.
multikvMaxTimeMs = <integer>
* The maximum amount of time, in milliseconds, to accumulate packet data
to
combine into a large tabular event in multikv format.
* Has no effect when 'mode' is set to 'single'.
* Advanced option.
* The minimum allowed value is 100 and the maximum allowed value is
5000.
* Default: 1000.
sid_cache_disabled = 0|1
* Enables or disables account Security IDentifier (SID) cache.
* This setting is global. It affects all Windows Network Monitor
stanzas.
* Default: 0.
sid_cache_exp = <integer>
* The expiration time, in seconds, for account SID cache entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor
stanzas.
* The minimum allowed value is 10 and the maximum allowed value is
31536000.
* Default: 3600.
sid_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative account SID cache
entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor
stanzas.
* The minimum allowed value is 10 and the maximum allowed value is
31536000.
* Default: 10.
sid_cache_max_entries = <integer>
* The maximum number of account SID cache entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor
stanzas.
* The minimum allowed value is 10 and the maximum allowed value is
40000.
* Default: 10.
disabled = 0|1
* Whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled.)
index = <string>
* The index that this input should send the data to.
* Optional.
* Default: the default index.
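* Illustrative example (hypothetical stanza name): capture inbound and
  outbound TCP connect events over IPv4, combined into multikv events:
    [WinNetMon://ConnectionTracking]
    direction = inbound;outbound
    packetType = connect
    protocol = tcp
    addressFamily = ipv4
    mode = multikv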
[powershell://<name>]
* Runs Windows PowerShell version 3 commands or scripts.
script = <string>
* A PowerShell command-line script or .ps1 script file that the input
should run.
* No default.
[powershell2://<name>]
* Runs Windows PowerShell version 2 commands or scripts.
script = <string>
* A PowerShell command-line script or .ps1 script file that the input
should run.
* No default.
schedule = <schedule>
* How often to run the specified PowerShell command or script.
* You can provide a valid cron schedule.
* Default: runs the command or script once, at startup.
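* Illustrative example (hypothetical script; assumes the 'schedule'
  setting is also accepted in [powershell://] stanzas, as it is in
  [powershell2://] stanzas): run a PowerShell command every five minutes
  using a cron schedule:
    [powershell://Processes]
    script = Get-Process | Select-Object Name, CPU
    schedule = */5 * * * *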
[remote_queue:<name>]
remote_queue.* = <string>
* With remote queues, communication between the indexer and the remote
queue
system might require additional configuration, specific to the type of
remote
queue.
* You can pass configuration information to the storage system by
specifying the settings through the following schema:
remote_queue.<scheme>.<config-variable> = <value>. For example:
remote_queue.sqs.access_key = ACCESS_KEY
* This setting is optional.
* No default.
remote_queue.type = [sqs|kinesis]
* Currently not supported. This setting is related to a feature that is
still under development.
* Required.
* Specifies the remote queue type, either Amazon Web Services (AWS)
Simple Queue Service (SQS) or Amazon Kinesis.
compressed = <boolean>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelReapInterval = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelTTL = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelReapLowater = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
remote_queue.sqs.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The access key to use when authenticating with the remote queue
system supporting the SQS API.
* If not specified, the indexer will look for these environment variables:
  AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
  variables are not set and the indexer is running on Elastic Compute
  Cloud (EC2), the indexer attempts to use the access key from the
  Identity and Access Management (IAM) role.
* This setting is optional.
* Default: not set.
remote_queue.sqs.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The secret key to use when authenticating with the remote queue
system supporting the SQS API.
* If not specified, the indexer will look for these environment
variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the
environment
variables are not set and the indexer is running on EC2, the indexer
attempts to use the secret key from the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.sqs.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The authentication region to use when signing the requests when
interacting
with the remote queue system supporting the SQS API.
* If not specified and the indexer is running on EC2, the auth_region is
  constructed automatically based on the EC2 region of the instance where
  the indexer is running.
* This setting is optional.
* Default: not set.
remote_queue.sqs.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote queue system supporting the SQS API.
* The scheme, http or https, can be used to enable or disable SSL
connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on
  the auth_region as follows: https://sqs.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which
is
either a value specified in 'remote_queue.sqs.auth_region' or a value
constructed automatically based on the EC2 region of the running
instance.
* Example: https://sqs.us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
* A value of 0 means unlimited.
* Default: 8
remote_queue.sqs.message_group_id = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The Message Group ID for Amazon Web Services Simple Queue Service
(SQS) First-In, First-Out (FIFO) queues.
* Setting a Message Group ID controls how messages within an AWS SQS
queue are
processed.
* For information on SQS FIFO queues and how messages in those queues
are
processed, see "Recommendations for FIFO queues" in the AWS SQS
Developer
Guide.
* If you configure this setting, Splunk software assumes that the SQS
queue is
a FIFO queue, and that messages in the queue should be processed
first-in,
first-out.
* Otherwise, Splunk software assumes that the SQS queue is a standard
queue.
* Can be between 1-128 alphanumeric or punctuation characters.
* NOTE: FIFO queues must have Content-Based Deduplication enabled.
* This setting is optional.
* Default: not set.
remote_queue.sqs.retry_policy = [max_count|none]
* Currently not supported. This setting is related to a feature that is
still
under development.
* The retry policy to use for remote queue operations.
* A retry policy specifies whether and how to retry file operations
that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation can
be
retried upon intermittent failure.
+ "none": Do not retry file operations upon failure.
* This setting is optional.
* Default: "max_count"
remote_queue.sqs.max_count.max_retries_per_part = <unsigned integer>
* Currently not supported. This setting is related to a feature that is
  still under development.
* When 'remote_queue.sqs.retry_policy' is "max_count", sets the maximum
  number of times a queue operation is retried upon intermittent failure.
* This setting is optional.
* Default: 9
* Default: 60
remote_queue.sqs.large_message_store.endpoint = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL
connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on
  the auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which
is
either a value specified via 'remote_queue.sqs.auth_region' or a
value
constructed automatically based on the EC2 region of the running
instance.
* Example: https://s3-us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.sqs.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The remote storage location where messages that are larger than the
underlying queue maximum message size will reside.
* The format for this attribute is:
<scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific
471
string for
identifying a location inside the storage system.
* These external systems are supported:
- Object stores that support the AWS S3 protocol. These use the
scheme "s3".
For example, "path=s3://mybucket/some/path".
* If not specified, messages exceeding the underlying queue's maximum
message
size are dropped.
* This setting is optional.
* Default: not set.
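* Illustrative example (hypothetical names; the feature is still under
  development): an SQS remote queue that spills oversized messages to S3:
    [remote_queue:my-sqs-queue]
    remote_queue.type = sqs
    remote_queue.sqs.auth_region = us-west-2
    remote_queue.sqs.endpoint = https://sqs.us-west-2.amazonaws.com
    remote_queue.sqs.large_message_store.path = s3://mybucket/some/path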
remote_queue.kinesis.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the access key to use when authenticating with the remote
queue
system supporting the Kinesis API.
* If not specified, the forwarder will look for these environment
  variables: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the
  environment variables are not set and the forwarder is running on EC2,
  the forwarder attempts to use the access key from the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the secret key to use when authenticating with the remote
queue
system supporting the Kinesis API.
* If not specified, the forwarder will look for these environment
variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the
environment
variables are not set and the forwarder is running on EC2, the
forwarder
attempts to use the secret key from the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The authentication region to use when signing the requests when
interacting
with the remote queue system supporting the Kinesis API.
* If not specified and the forwarder is running on EC2, the auth_region
  is constructed automatically based on the EC2 region of the instance
  where the forwarder is running.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote queue system supporting the Kinesis API.
* The scheme, http or https, can be used to enable or disable SSL
connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on
  the auth_region as follows: https://kinesis.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which
is
either a value specified via 'remote_queue.kinesis.auth_region' or a
value
constructed automatically based on the EC2 region of the running
instance.
* Example: https://kinesis.us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.kinesis.retry_policy = [max_count|none]
* The retry policy to use for remote queue operations.
* A retry policy specifies whether and how to retry file operations
that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation
will be
retried upon intermittent failure.
+ "none": Do not retry file operations upon failure.
* This setting is optional.
* Default: "max_count"
remote_queue.kinesis.max_count.max_retries_per_part = <unsigned
integer>
* When 'remote_queue.kinesis.retry_policy' is "max_count", sets the
maximum number of times a queue operation is retried upon intermittent
failure.
* This setting is optional.
* Default: 9
remote_queue.kinesis.timeout.connect = <unsigned integer>
* Currently not supported. This setting is related to a feature that is
still under development.
* The connection timeout, in milliseconds, when interacting with
Kinesis for this queue.
* This setting is optional.
* Default: 5000
* Default: 100000
remote_queue.kinesis.large_message_store.endpoint = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL
connectivity
with the endpoint.
* If not specified, the endpoint will be constructed automatically based
  on the auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which
is
either a value specified via 'remote_queue.kinesis.auth_region' or a
value
constructed automatically based on the EC2 region of the running
instance.
* Example: https://s3-us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.kinesis.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The remote storage location where messages larger than the
underlying queue maximum message size will reside.
* The format for this attribute is:
<scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific
string for
identifying a location inside the storage system.
* These external systems are supported:
- Object stores that support AWS's S3 protocol. These use the scheme
"s3".
For example, "path=s3://mybucket/some/path".
* If not specified, messages exceeding the underlying queue maximum
message
size are dropped.
* This setting is optional.
* Default: not set.
inputs.conf.example
# Version 7.2.1
#
# This is an example inputs.conf. Use this file to configure data inputs.
#
# To use one or more of these configurations, copy the configuration block
# into inputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[monitor:///var/log]
[monitor:///var/log/httpd]
sourcetype = access_common
ignoreOlderThan = 7d
[monitor:///mnt/logs]
host_segment = 3
# Listen on TCP port 9997 for raw data from any remote server. The host of
# the data is set to the IP address of the remote server.
[tcp://:9997]
[tcp://:9995]
connection_host = dns
sourcetype = log4j
source = tcp:9995
[tcp://10.1.1.10:9995]
host = webhead-1
sourcetype = access_common
source = //10.1.1.10/var/log/apache/access.log
[splunktcp://:9996]
connection_host = dns
[splunktcp://10.1.1.100:9996]
# Listen on TCP port 514 for data from syslog.corp.company.net. The
# sourcetype is set to syslog and the host is set to the host name of the
# remote server.
[tcp://syslog.corp.company.net:514]
sourcetype = syslog
connection_host = dns
[splunktcptoken://tok1]
token = $7$ifQTPTzHD/BA8VgKvVcgO1KQAtr3N1C8S/1uK3nAKIE9dd9e9g==
[SSL]
serverCert=$SPLUNK_HOME/etc/auth/server.pem
password=password
rootCA=$SPLUNK_HOME/etc/auth/cacert.pem
requireClientCert=false
[splunktcp-ssl:9996]
[fschange:/etc/]
fullEvent=true
pollPeriod=60
recurse=true
sendEventMaxSize=100000
index=main
# Monitor the Security Windows Event Log channel, getting the most recent
# events first, then older, and finally continuing to gather newly arriving
# events
[WinEventLog://Security]
disabled = 0
start_from = newest
evt_dc_name =
evt_dns_name =
evt_resolve_ad_ds =
evt_resolve_ad_obj = 1
checkpointInterval = 5
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 1
batch_size = 10
checkpointInterval = 5
[tcp://9994]
queueSize=50KB
persistentQueueSize=100MB
# These stanzas gather performance data from the local system only.
# Use wmi.conf for performance monitor metrics on remote systems.

# Query the PhysicalDisk performance object and gather disk access data for
# all physical drives installed in the system. Store this data in the
# "perfmon" index.
# Note: If the interval attribute is set to 0, Splunk will reset the
# interval to 1.
[perfmon://LocalPhysicalDisk]
interval = 0
object = PhysicalDisk
counters = Disk Bytes/sec; % Disk Read Time; % Disk Write Time; % Disk Time
instances = *
disabled = 0
index = PerfMon
[perfmon://LocalMainMemory]
interval = 5
object = Memory
counters = Committed Bytes; Available Bytes; % Committed Bytes In Use
disabled = 0
index = main
# Gather data on USB activity levels every 10 seconds. Store this data in
# the default index.
[perfmon://USBChanges]
interval = 10
object = USB
counters = Usb Control Data Bytes/Sec
instances = *
disabled = 0
# Monitor the default domain controller (DC) for the domain that the
# computer running Splunk belongs to. Start monitoring at the root node of
# Active Directory.
[admon://NearestDC]
targetDc =
startingNode =
# Monitor a specific DC, with a specific starting node. Store the events
# in the "admon" Splunk index. Do not print Active Directory schema. Do not
# index baseline events.
[admon://DefaultTargetDC]
targetDc = pri01.eng.ad.splunk.com
startingNode = OU=Computers,DC=eng,DC=ad,DC=splunk,DC=com
index = admon
printSchema = 0
baseline = 0
[admon://SecondTargetDC]
targetDc = pri02.eng.ad.splunk.com
startingNode = OU=Computers,DC=hr,DC=ad,DC=splunk,DC=com
instance.cfg.conf
The following are the spec and example files for instance.cfg.conf.
instance.cfg.conf.spec
# Version 7.2.1
#
# This file contains the set of attributes and values you can expect to
# find in the SPLUNK_HOME/etc/instance.cfg file; the instance.cfg file is
# not to be modified or removed by user. LEAVE THE instance.cfg FILE ALONE.
#
#
GLOBAL SETTINGS
[general]
guid = <GUID in all-uppercase>
* Splunk expects that every Splunk instance will have a unique string for
  this value, independent of all other Splunk instances. By default, Splunk
  will arrange for this without user intervention.
* If server.conf has a value of 'guid' AND instance.cfg has a value of
  'guid' AND these values are different, startup halts and an error is
  shown. The operator must resolve this error. We recommend erasing the
  value from the server.conf file, and then restarting.
instance.cfg.conf.example
# Version 7.2.1
#
# This file contains an example SPLUNK_HOME/etc/instance.cfg file; the
# instance.cfg file is not to be modified or removed by user. LEAVE THE
# instance.cfg FILE ALONE.
#
[general]
guid = B58A86D9-DF3D-4BF8-A426-DB85C231B699
limits.conf
The following are the spec and example files for limits.conf.
limits.conf.spec
# Version 7.2.1
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure limitations for the search commands.
#
# Each stanza controls different search commands settings.
#
# There is a limits.conf file in the $SPLUNK_HOME/etc/system/default/
# directory. Never change or copy the configuration files in the default
# directory. The files in the default directory must remain intact and in
# their original location.
#
# To set custom configurations, create a new file with the name limits.conf
# in the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific
# settings that you want to customize to the local configuration file.
# For examples, see limits.conf.example. You must restart the Splunk
# instance to enable configuration changes.
#
# To learn more about configuration files (including file precedence) see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# About Distributed Search
#   Unlike most settings which affect searches, limits.conf settings are
#   not provided by the search head to be used by the search peers. This
#   means that if you need to alter search-affecting limits in a
#   distributed environment, typically you will need to modify these
#   settings on the relevant peers and search head for consistent results.
#
GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#  * You can also define global settings outside of any stanza, at the top
#    of the file.
#  * Each .conf file should have at most one default stanza. If there are
#    multiple default stanzas, settings are combined. In the case of
#    multiple definitions of the same setting, the last definition in the
#    file takes precedence.
#  * If a setting is defined at both the global level and in a specific
#    stanza, the value in the specific stanza takes precedence.
#
# CAUTION: Do not alter the settings in the limits.conf file unless you
# know what you are doing. Improperly configured limits might result in
# splunkd crashes, memory overuse, or both.
[default]
DelayArchiveProcessorShutdown = <bool>
* Specifies whether, during splunkd shutdown, the archive processor should
  finish processing the archive file currently under process.
* When set to "false": The archive processor abandons further processing
  of the archive file and will process it again from the start when
  splunkd restarts.
* When set to "true": The archive processor completes processing of the
  archive file. Shutdown is delayed.
* Default: false
max_mem_usage_mb = <non-negative integer>
* The maximum memory, in megabytes (MB), that a batch of events or results
  can use in a search for all searches run on this system.
* When set to "0": Specifies that the size is unbounded. Searches might
  be allowed to grow to arbitrary sizes.
* NOTE:
  * The mvexpand command uses the 'max_mem_usage_mb' value in a
    different way.
    * The mvexpand command has no combined logic with 'maxresults'.
    * If the memory limit is exceeded, output is truncated, not spilled
      to disk.
  * The stats command processor uses the 'max_mem_usage_mb' value in
    the following way.
    * If the estimated memory usage exceeds the specified limit, the
      results are spilled to disk.
    * If 0 is specified, the results are spilled to the disk when the
      number of results exceed the 'maxresultrows' setting.
  * The eventstats command processor uses the 'max_mem_usage_mb' value
    in the following way.
    * Both the 'max_mem_usage_mb' and the 'maxresultrows' settings are
      used to determine the maximum number of results to return. If the
      limit for one setting is reached, the eventstats processor
      continues to return results until the limit for the other setting
      is reached. When both limits are reached, the eventstats command
      processor stops adding the requested fields to the search results.
    * If you set 'max_mem_usage_mb' to 0, the eventstats command
      processor uses only the 'maxresultrows' setting as the threshold.
      When the number of results exceeds the 'maxresultrows' setting,
      the eventstats command processor stops adding the requested fields
      to the search results.
* Default: 200
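* Illustrative example: cap per-search batch memory at 500 MB in a local
  limits.conf (the value is hypothetical; tune it for your system):
    [default]
    max_mem_usage_mb = 500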
min_batch_size_bytes = <integer>
* Specifies the size, in bytes, of the file/tar after which the file is
  handled by the batch reader instead of the tailing processor.
* Global parameter, cannot be configured per input.
* NOTE: Configuring this to a very small value could lead to backing up
  of jobs at the tailing processor.
* Default: 20,971,520 bytes
regex_cpu_profiling = <bool>
* Enable CPU time metrics for RegexProcessor. Output will be in the
  metrics.log file.
  Entries in metrics.log will appear as per_host_regex_cpu,
  per_source_regex_cpu, per_sourcetype_regex_cpu, and per_index_regex_cpu.
* Default: false
[searchresults]
compression_level = <integer>
* Compression level to use when writing search results to .csv.gz files.
* Default: 1
maxresultrows = <integer>
* Configures the maximum number of events that are generated by search
  commands which grow the size of your result set (such as multikv) or
  that create events. Other search commands are explicitly controlled in
  specific stanzas below.
* This limit should not exceed 50000.
* Default: 50000
tocsv_maxretry = <integer>
* Maximum number of times to retry the atomic write operation.
* When set to "1": Specifies that there will be no retries.
* Default: 5
tocsv_retryperiod_ms = <integer>
* Period of time, in milliseconds, to wait before each retry.
* Default: 500
[search_info]
filteredindexes_log_level = [DEBUG|INFO|WARN|ERROR]
* Log level of messages when search returns no results because the user
  has no permissions to search on queried indexes.
infocsv_log_level = [DEBUG|INFO|WARN|ERROR]
* Limits the messages which are added to the info.csv file to the
stated
level and above.
* For example, if 'infocsv_log_level' is WARN, messages of type WARN
  and higher will be added to the info.csv file.
show_warn_on_filtered_indexes = <boolean>
* Log warnings if search returns no results because the user has
  no permissions to search on queried indexes.
[subsearch]
maxout = <integer>
* Maximum number of results to return from a subsearch.
* This value cannot be greater than or equal to 10500.
* Default: 10000
maxtime = <integer>
* Maximum number of seconds to run a subsearch before finalizing
* Default: 60
ttl = <integer>
* The time to live (ttl), in seconds, of the cache for the results of a
given
subsearch.
* Do not set this below 120 seconds.
* See the definition in the [search] stanza under the "TTL" section for
  more details on how the ttl is computed.
* Default: 300 (5 minutes)
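* Illustrative example (hypothetical values): raise the subsearch limits
  in a local limits.conf, keeping 'maxout' below the 10500 ceiling:
    [subsearch]
    maxout = 10400
    maxtime = 120
    ttl = 300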
SEARCH COMMAND
# This section contains the limitation settings for the search command.
# The settings are organized by type of setting.
[search]
############################################################################
# Batch search
############################################################################
# This section contains settings for batch search.
allow_batch_mode = <bool>
* Specifies whether or not to allow the use of batch mode which searches
in disk based batches in a time insensitive manner.
* In distributed search environments, this setting is used on the search
head.
* Default: true
batch_search_max_index_values = <int>
* When using batch mode, this limits the number of event entries read
from the
index file. These entries are small, approximately 72 bytes. However
batch
mode is more efficient when it can read more entries at one time.
* Setting this value to a smaller number can lead to slower search
performance.
* A balance needs to be struck between more efficient searching in batch
  mode and running out of memory on the system with concurrently running
  searches.
* Default: 10000000
batch_search_max_pipeline = <int>
* Controls the number of search pipelines that are
launched at the indexer during batch search.
* Increasing the number of search pipelines should help improve search
  performance; however, there will be an increase in thread and memory
  usage.
* This setting applies only to searches that run on remote indexers.
* Default: 1
batch_search_max_results_aggregator_queue_size = <int>
* Controls the size, in MB, of the search results queue to which all
the search pipelines dump the processed search results.
* Increasing the size can lead to search performance gains.
Decreasing the size can reduce search performance.
* Do not specify zero for this setting.
* Default: 100
batch_search_max_serialized_results_queue_size = <int>
* Controls the size, in MB, of the serialized results queue from which
the serialized search results are transmitted.
* Increasing the size can lead to search performance gains.
Decreasing the size can reduce search performance.
* Do not specify zero for this setting.
* Default: 100
batch_retry_min_interval = <int>
* When batch mode attempts to retry the search on a peer that failed,
specifies the minimum time, in seconds, to wait to retry the search.
* Default: 5
batch_retry_max_interval = <int>
* When batch mode attempts to retry the search on a peer that failed,
specifies the maximum time, in seconds, to wait to retry the search.
* Default: 300 (5 minutes)
batch_retry_scaling = <double>
* After a batch retry attempt fails, uses this scaling factor to
increase
the time to wait before trying the search again.
* The value should be > 1.0.
* Default: 1.5
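* For example (illustrative arithmetic, assuming the scaling factor is
  applied multiplicatively to each successive wait): with the defaults
  'batch_retry_min_interval' = 5, 'batch_retry_scaling' = 1.5, and
  'batch_retry_max_interval' = 300, the successive retry waits are
  approximately 5, 7.5, 11.25, 16.9, ... seconds, capped at 300.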
############################################################################
# Bundles
############################################################################
# This section contains settings for bundles and bundle replication.
load_remote_bundles = <bool>
* On a search peer, allow remote (search head) bundles to be loaded in
splunkd.
* Default: false.
replication_file_ttl = <int>
* The time to live (ttl), in seconds, of bundle replication tarballs,
for example: *.bundle files.
* Default: 600 (10 minutes)
replication_period_sec = <int>
* The minimum amount of time, in seconds, between two successive bundle
replications.
* Default: 60
sync_bundle_replication = [0|1|auto]
* A flag that indicates whether configuration file replication blocks
searches or is run asynchronously.
* When set to "auto": The Splunk software uses asynchronous
  replication only if all of the peers support asynchronous bundle
  replication. Otherwise, synchronous replication is used.
* Default: auto
############################################################################
# Concurrency
############################################################################
# This section contains settings for search concurrency limits.
base_max_searches = <int>
* A constant to add to the maximum number of searches, computed as a
multiplier of the CPUs.
* Default: 6
max_searches_per_cpu = <int>
* The maximum number of concurrent historical searches for each CPU.
The system-wide limit of historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus +
base_max_searches
* NOTE: The maximum number of real-time searches is computed as:
max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Default: 1
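* For example (illustrative arithmetic, not a recommendation): on a
  16-CPU search head with the defaults 'max_searches_per_cpu' = 1 and
  'base_max_searches' = 6, the limit is
  max_hist_searches = 1 x 16 + 6 = 22 concurrent historical searches.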
############################################################################
# Distributed search
############################################################################
# This section contains settings for distributed search connection
# information.
addpeer_skew_limit = <int>
* Absolute value of the largest time skew, in seconds, that is allowed
  when configuring a search peer from a search head.
* If the difference in time (skew) between the search head and the peer
  is greater than 'addpeer_skew_limit', the search peer is not added.
* This is only relevant to manually added peers. This setting has no
  effect on index cluster search peers.
* Default: 600 (10 minutes)
fetch_remote_search_log = [enabled|disabledSavedSearches|disabled]
* When set to "enabled": All remote search logs are downloaded, barring
  the oneshot search.
* When set to "disabledSavedSearches": Downloads all remote logs other
  than saved search logs and oneshot search logs.
* When set to "disabled": Irrespective of the search type, all remote
  search log download functionality is disabled.
* NOTE:
  * The previous values [true|false] are still supported but not
    recommended.
  * The previous value of "true" maps to the current value of
    "enabled".
  * The previous value of "false" maps to the current value of
    "disabled".
* Default: disabledSavedSearches
max_chunk_queue_size = <int>
* The maximum size of the chunk queue.
* Default: 10000000
max_combiner_memevents = <int>
* Maximum size of the in-memory buffer for the search results combiner.
The <int> is the number of events.
* Default: 50000
max_workers_searchparser = <int>
* The number of worker threads used to process search results when
  using the round-robin policy.
* Default: 5
results_queue_min_size = <integer>
* The minimum number of search result chunks that will be kept from
  peers for processing on the search head before throttling the rate
  at which data is accepted.
* The minimum queue size, in chunks, is the 'results_queue_min_size'
  value or the number of peers providing results, whichever is greater.
* Default: 10
result_queue_max_size = <integer>
* The maximum size, in MB, that will be kept from peers for processing
  on the search head before throttling the rate at which data is
  accepted.
* The 'results_queue_min_size' value takes precedence. The number of
  search results chunks specified by 'results_queue_min_size' will
  always be retained in the queue even if the combined size in MB
  exceeds the 'result_queue_max_size' value.
* Default: 100
results_queue_read_timeout_sec = <integer>
* The amount of time, in seconds, to wait when the search executing on
the
search head has not received new results from any of the peers.
* Cannot be less than the 'receiveTimeout' setting in the
distsearch.conf
file.
* Default: 900
batch_wait_after_end = <int>
* DEPRECATED: Use the 'results_queue_read_timeout_sec' setting instead.
############################################################################
# Field stats
############################################################################
# This section contains settings for field statistics.
fieldstats_update_freq = <number>
* How often to update the field summary statistics, as a ratio to the
elapsed
run time so far.
* Smaller values mean more frequent updates.
* When set to "0": Specifies to update as frequently as possible.
* Default: 0
fieldstats_update_maxperiod = <number>
* The maximum period, in seconds, for updating field summary statistics.
* When set to "0": Specifies that there is no maximum period. The
  period is dictated by the calculation:
  current_run_time x fieldstats_update_freq
* Fractional seconds are allowed.
* Default: 60
min_freq = <number>
* Minimum frequency of a field that is required for the field to be
included
in the /summary endpoint.
* The frequency must be a fraction >=0 and <=1.
* Default: 0.01 (1%)
############################################################################
# History
############################################################################
# This section contains settings for search history.
enable_history = <bool>
* Specifies whether to keep a history of the searches that are run.
* Default: true
max_history_length = <int>
* Maximum number of searches to store in history for each user and
application.
* Default: 1000
############################################################################
# Memory tracker
############################################################################
# This section contains settings for the memory tracker.
enable_memory_tracker = <bool>
* Specifies if the memory tracker is enabled.
* When set to "false" (disabled): The search is not terminated even if
  the search exceeds the memory limit.
* When set to "true": Enables the memory tracker.
* Must be set to "true" to enable the
  'search_process_memory_usage_threshold' setting or the
  'search_process_memory_usage_percentage_threshold' setting.
* Default: false
search_process_memory_usage_threshold = <double>
* To use this setting, the 'enable_memory_tracker' setting must be set
  to "true".
* Specifies the maximum memory, in MB, that the search process can
  consume in RAM.
* Search processes that violate the threshold are terminated.
* If the value is set to 0, then search processes are allowed to grow
unbounded in terms of in memory usage.
* Default: 4000 (4GB)
search_process_memory_usage_percentage_threshold = <float>
* To use this setting, the 'enable_memory_tracker' setting must be set
  to "true".
* Specifies the percent of the total memory that the search process is
  entitled to consume.
* Search processes that violate the threshold percentage are
  terminated.
* If the value is set to zero, then Splunk search processes are allowed
  to grow unbounded in terms of percentage memory usage.
* Any setting larger than 100 or less than 0 is discarded and the
  default value is used.
* Default: 25%
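For example, a minimal limits.conf sketch that enables the memory
tracker and caps each search process at 2 GB of RAM (the values are
illustrative, not recommendations):

    [search]
    enable_memory_tracker = true
    # Terminate any search process that consumes more than 2000 MB.
    search_process_memory_usage_threshold = 2000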
############################################################################
# Meta search
############################################################################
# This section contains settings for meta search.
allow_inexact_metasearch = <bool>
* Specifies if a metasearch that is inexact is allowed.
* When set to "true": An INFO message is added to the inexact
  metasearches.
* When set to "false": A fatal exception occurs at search parsing time.
* Default: false
indexed_as_exact_metasearch = <bool>
* Specifies if a metasearch can process <field>=<value> the same as
<field>::<value>, if <field> is an indexed field.
* When set to "true": Allows a larger set of metasearches when the
  'allow_inexact_metasearch' setting is "false". However, some of the
  metasearches might be inconsistent with the results of doing a normal
  search.
* Default: false
############################################################################
# Misc
############################################################################
# This section contains miscellaneous search settings.
disk_usage_update_period = <number>
* Specifies how frequently, in seconds, the search process estimates
  the artifact disk usage.
* The quota for the amount of disk space that a search job can use is
controlled by the 'srchDiskQuota' setting in the authorize.conf file.
* Exceeding this quota causes the search to be auto-finalized
immediately,
even if there are results that have not yet been returned.
* Fractional seconds are allowed.
* Default: 10
dispatch_dir_warning_size = <int>
* Specifies the number of jobs in the dispatch directory that triggers
  the issuing of a bulletin message warning that performance might be
  impacted.
* Default: 5000
do_not_use_summaries = <bool>
* Do not use this setting without working in tandem with Splunk
  Support.
* This setting is a very narrow subset of "summary_mode=none".
* When set to "true": Disables some functionality that is necessary
  for report acceleration.
* In particular, when set to "true", search processes will no longer
  query the main splunkd's /admin/summarization endpoint for report
  acceleration summary IDs.
* In certain narrow use cases this might improve performance if report
  acceleration (savedsearches.conf:auto_summarize) is not in use, by
  lowering the main splunkd's process overhead.
* Default: false
enable_datamodel_meval = <bool>
* Enable concatenation of successively occurring evals into a single
comma-separated eval during the generation of datamodel searches.
* Default: true
force_saved_search_dispatch_as_user = <bool>
* Specifies whether to overwrite the 'dispatchAs' value.
* When set to "true": The 'dispatchAs' value is overwritten by "user"
  regardless of the [user|owner] value in the savedsearches.conf file.
* When set to "false": The value in the savedsearches.conf file is
  used.
* You might want to set this to "true" to effectively disable
  'dispatchAs = owner' for the entire install, if that more closely
  aligns with security goals.
* Default: false
max_id_length = <integer>
* Maximum length of the custom search job ID when spawned by using the
  REST API argument "id".
search_keepalive_frequency = <int>
* Specifies how often, in milliseconds, a keepalive is sent while a
search is running.
* Default: 30000 (30 seconds)
search_keepalive_max = <int>
* The maximum number of uninterrupted keepalives before the connection is
closed.
* This counter is reset if the search returns results.
* Default: 100
search_retry = <bool>
* Specifies whether the Splunk software retries parts of a search
  within a currently-running search process when there are indexer
  failures in an indexer clustering environment.
* Indexers can fail during rolling restart or indexer upgrade when
indexer
clustering is enabled. Indexer reboots can also result in failures.
* This setting applies only to historical search in batch mode,
real-time
search, and indexed real-time search.
* When set to true, the Splunk software attempts to rerun searches on
indexer
cluster nodes that go down and come back up again. The search process
on the
search head maintains state information about the indexers and
buckets.
* NOTE: Search retry is on a best-effort basis, and it is possible
for Splunk software to return partial results for searches
without warning when you enable this setting.
* When set to false, the search process will stop returning results
from a specific
indexer when that indexer undergoes a failure.
* Default: false
stack_size = <int>
* The stack size, in bytes, of the thread that executes the search.
* Default: 4194304 (4MB)
summary_mode = [all|only|none]
* Specifies if precomputed summary data are to be used.
* When set to "all": Use summary data if possible, otherwise use raw
  data.
* When set to "only": Use summary data if possible, otherwise do not
  use any data.
* When set to "none": Never use precomputed summary data.
* Default: all
track_indextime_range = <bool>
* Specifies if the system should track the _indextime range of returned
search results.
* Default: true
use_bloomfilter = <bool>
* Controls whether to use bloom filters to rule out buckets.
* Default: true
use_metadata_elimination = <bool>
* Controls whether to use metadata to rule out buckets.
* Default: true
results_serial_format = [csv|srs]
* The internal format used for storing serialized results on disk.
* Options:
* csv: Comma-separated values format
* srs: Splunk binary format
* Default: srs
* NOTE: Do not change unless instructed to do so by Splunk Support.
results_compression_algorithm = [gzip|none]
* The compression algorithm used for storing serialized results on disk.
* Options:
* gzip: gzip
* none: No compression
* Default: gzip
* NOTE: Do not change unless instructed to do so by Splunk Support.
use_dispatchtmp_dir = <bool>
* DEPRECATED. This setting has been deprecated and has no effect.
auto_cancel_after_pause = <integer>
* Specifies the amount of time, in seconds, that a search must be paused
before
the search is automatically cancelled.
* If set to 0, a paused search is never automatically cancelled.
* Default: 0
always_include_indexedfield_lispy = <bool>
* Controls whether to always search for a field that does not have
  INDEXED=true set in fields.conf using both the indexed and
  non-indexed forms.
* If true, when searching for <field>=<val>, the lexicon is searched
  for both <field>::<val> and <val>.
* If false, when searching for <field>=<val>, the lexicon is searched
  for only <val>.
* Set to true if you have fields that are sometimes indexed and
  sometimes not indexed. For field names that are always indexed, it
  is much better for performance to set INDEXED=true in fields.conf
  for that field instead.
* Default: false
############################################################################
# Parsing
############################################################################
# This section contains settings related to parsing searches.
max_macro_depth = <int>
* Maximum recursion depth for macros. Specifies the maximum levels for
macro
expansion.
* It is considered a search exception if macro expansion does not stop
  after this many levels.
* Value must be greater than or equal to 1.
* Default: 100
max_subsearch_depth = <int>
* Maximum recursion depth for subsearches. Specifies the maximum levels
for
subsearches.
* It is considered a search exception if a subsearch does not stop after
this many levels.
* Default: 8
min_prefix_len = <integer>
* The minimum length of a prefix before a wildcard (*) to use in the
query
to the index.
* Default: 1
use_directives = <bool>
* Specifies whether a search can take directives and interpret them
into arguments.
* This is used in conjunction with the search optimizer in order to
improve search performance.
* Default: true
############################################################################
# Phased execution settings
############################################################################
# This section contains settings for multi-phased execution
phased_execution = <bool>
* DEPRECATED. This setting has been deprecated.
phased_execution_mode = [multithreaded|auto|singlethreaded]
* NOTE: Do not change this setting unless instructed to do so by Splunk
Support!
* Controls whether searches use the multiple-phase method of search
execution,
which is required for parallel reduce functionality as of Splunk
Enterprise
7.1.0.
* When set to 'multithreaded' the Splunk platform uses the
multiple-phase
search execution method. Allows usage of the 'redistribute' command.
* When set to 'auto', the Splunk platform uses the multiple-phase
search
execution method when the 'redistribute' command is used in the search
string. If the 'redistribute' command is not present in the search
string,
the single-phase search execution method is used.
* When set to 'singlethreaded' the Splunk platform uses the
  single-threaded search execution method, which does not allow usage
  of the 'redistribute' command.
* Default: multithreaded
############################################################################
# Preview
############################################################################
# This section contains settings for previews.
max_preview_period = <integer>
* The maximum time, in seconds, between previews.
* Used with the preview interval that is calculated with the
  'preview_duty_cycle' setting.
* When set to "0": Specifies unlimited time between previews.
* Default: 0
min_preview_period = <integer>
* The minimum time, in seconds, required between previews. When the
  interval calculated using 'preview_duty_cycle' indicates that
  previews should run frequently, this setting limits the preview
  frequency.
* Default: 1
preview_duty_cycle = <number>
* The maximum time to spend generating previews, as a fraction of the
total
search time.
* Must be > 0.0 and < 1.0
* Default: 0.25
############################################################################
# Quota or queued searches
############################################################################
# This section contains settings for quota or queued searches.
default_allow_queue = [0|1]
* Unless otherwise specified by using a REST API argument, specifies
  if an asynchronous job spawning request should be queued on quota
  violation. If not, an HTTP "server too busy" error is returned.
* Default: 1 (true)
dispatch_quota_retry = <integer>
* The maximum number of times to retry to dispatch a search when the
quota has
been reached.
* Default: 4
dispatch_quota_sleep_ms = <integer>
* The time, in milliseconds, between retrying to dispatch a search when
a
quota is reached.
* Retries the given number of times, with each successive wait 2x longer
than
the previous wait time.
* Default: 100
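* For example (illustrative arithmetic based on the defaults above):
  with 'dispatch_quota_retry' = 4 and 'dispatch_quota_sleep_ms' = 100,
  the waits between successive dispatch attempts are approximately
  100, 200, and 400 milliseconds, since each successive wait is 2x
  longer than the previous wait.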
enable_cumulative_quota = <bool>
* Specifies whether to enforce cumulative role-based quotas.
* Default: false
queued_job_check_freq = <number>
* Frequency, in seconds, to check queued jobs to determine if the jobs
can
be started.
* Fractional seconds are allowed.
* Default: 1.
############################################################################
# Reading chunk controls
############################################################################
# This section contains settings for reading chunk controls.
chunk_multiplier = <integer>
* A multiplier that the 'max_results_perchunk',
  'min_results_perchunk', and 'target_time_perchunk' settings are
  multiplied by for a long-running search.
* Default: 5
long_search_threshold = <integer>
* The time, in seconds, until a search is considered "long running".
* Default: 2
max_rawsize_perchunk = <integer>
* The maximum raw size, in bytes, of results for each call to search
(in dispatch).
* When set to "0": Specifies that there is no size limit.
* This setting is not affected by the 'chunk_multiplier' setting.
* Default: 100000000 (100MB)
max_results_perchunk = <integer>
* Maximum results for each call to search (in dispatch), and the
  maximum number of results to emit for each call to the preview data
  generator.
* Must be less than or equal to the 'maxresultrows' setting.
* Default: 2500
min_results_perchunk = <integer>
* The minimum results for each call to search (in dispatch).
* Must be less than or equal to the 'max_results_perchunk' setting.
* Default: 100
target_time_perchunk = <integer>
* The target duration, in milliseconds, of a particular call to fetch
search results.
* Default: 2000 (2 seconds)
############################################################################
# Real-time
############################################################################
# This section contains settings for real-time searches.
check_splunkd_period = <number>
* Amount of time, in seconds, that determines how frequently the search
process
(when running a real-time search) checks whether the parent process
(splunkd) is running or not.
* Fractional seconds are allowed.
* Default: 60 (1 minute)
realtime_buffer = <int>
* Maximum number of accessible events to keep for real-time searches in
Splunk Web.
* Acts as circular buffer after this buffer limit is reached.
* Must be greater than or equal to 1.
* Default: 10000
############################################################################
# Remote storage
############################################################################
# This section contains settings for remote storage.
bucket_localize_acquire_lock_timeout_sec = <int>
* The maximum amount of time, in seconds, to wait when attempting to
acquire a
lock for a localized bucket.
* When set to 0, waits indefinitely.
* This setting is only relevant when using remote storage.
* Default: 60 (1 minute)
bucket_localize_max_timeout_sec = <int>
* The maximum amount of time, in seconds, to spend localizing a bucket
stored
in remote storage.
* If the bucket contents (what is required for the search) cannot be
localized
in that timeframe, the bucket will not be searched.
* When set to "0": Specifies an unlimited amount of time.
* This setting is only relevant when using remote storage.
* Default: 300 (5 minutes)
bucket_localize_status_check_period_ms = <int>
* The amount of time, in milliseconds, between consecutive status
checks to see
if the needed bucket contents required by the search have been
localized.
* This setting is only relevant when using remote storage.
* The minimum and maximum values are 10 and 60000, respectively. If the
specified value falls outside this range, it is effectively set to the
nearest value within the range. For example, if you set the value to
70000, the effective value will be 60000.
* Default: 500 (.5 seconds)
bucket_localize_max_lookahead = <int>
* Specifies the maximum number of buckets the search command localizes
for look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of
additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64.
Other
invalid values will be discarded and the default will be substituted.
* This setting is only relevant when using remote storage.
* Default: 5
bucket_localize_lookahead_priority_ratio = <int>
* A value of N means that lookahead localizations occur for only 1 out
  of every N search localizations, if any.
* Default: 5
bucket_predictor = [consec_not_needed|everything]
* Specifies which bucket file prediction algorithm to use.
* Do not change this unless you know what you are doing.
* Default: consec_not_needed
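For example, a minimal limits.conf sketch that tightens bucket
localization timeouts for a remote storage deployment (the values are
illustrative, not recommendations):

    [search]
    # Give up on localizing a bucket after 2 minutes instead of 5.
    bucket_localize_max_timeout_sec = 120
    # Check localization status every quarter second.
    bucket_localize_status_check_period_ms = 250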
############################################################################
# Results storage
############################################################################
# This section contains settings for storing final search results.
max_count = <integer>
* The number of events that can be accessible in any given status bucket
(when status_buckets = 0).
* The last accessible event in a call that takes a base and count.
* Note: This value does not reflect the number of events displayed in
the
UI after the search is evaluated or computed.
* Default: 500000
max_events_per_bucket = <integer>
* For searches with 'status_buckets' > 0, this setting limits the
  number of events retrieved for each timeline bucket.
* Default: 1000
status_buckets = <integer>
* The approximate maximum number of buckets to generate and maintain
  in the timeline.
* Default: 0, which means do not generate timeline information.
truncate_report = [1|0]
* Specifies whether or not to apply the 'max_count' setting to report
output.
* Default: 0 (false)
write_multifile_results_out = <bool>
* At the end of the search, if results are in multiple files, write out
the
multiple files to the results_dir directory, under the search results
directory.
* This setting speeds up post-processing search, since the results will
already be split into appropriate size files.
* Default: true
############################################################################
# Search process
############################################################################
# This section contains settings for search process configurations.
idle_process_cache_search_count = <int>
* The number of searches that the search process must reach before
  purging older data from the cache. The purge is performed even if
  the 'idle_process_cache_timeout' has not been reached.
* When a search process is allowed to run more than one search, the
  search process can cache some data between searches.
* When set to a negative value: No purge occurs, no matter how many
  searches are run.
* Has no effect on Windows if 'search_process_mode' is not "auto"
  or if 'max_searches_per_process' is set to 0 or 1.
* Default: 8
idle_process_cache_timeout = <number>
* The amount of time, in seconds, that a search process must be idle
before
the system purges some older data from these caches.
* When a search process is allowed to run more than one search, the
search
process can cache some data between searches.
* When set to a negative value: No purge occurs, no matter how long
  the search process is idle.
* When set to "0": Purging always occurs, regardless of whether the
  process has been idle or not.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 0.5 (seconds)
idle_process_regex_cache_hiwater = <int>
* A threshold for the number of entries in the regex cache. If the
  regex cache grows to larger than this number of entries, the system
  attempts to purge some of the older entries.
* When a search process is allowed to run more than one search, the
  search process can cache compiled regex artifacts.
* Normally the 'idle_process_cache_search_count' and the
  'idle_process_cache_timeout' settings will keep the regex cache a
  reasonable size. This setting is to prevent the cache from growing
  extremely large during a single large search.
* When set to a negative value: No purge occurs, no matter how large
  the cache.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 2500
idle_process_reaper_period = <number>
* The amount of time, in seconds, between checks to determine if there
are
too many idle search processes.
* When a search process is allowed to run more than one search, the
system
checks if there are too many idle search processes.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 30
launcher_max_idle_checks = <int>
* Specifies the number of idle processes that are inspected before
giving up
and starting a new search process.
* When allowing more than one search to run for each process, the
  system attempts to find an appropriate idle process to use.
* When set to a negative value: Every eligible idle process is
  inspected.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 5
launcher_threads = <int>
* The number of server threads to run to manage the search processes.
* Valid only when more than one search is allowed to run for each
  process.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: -1 (a value is selected automatically)
max_old_bundle_idle_time = <number>
* The amount of time, in seconds, that a process bundle must be idle
before
the process bundle is considered for reaping.
* Used when reaping idle search processes and the process is not
configured
with the most recent configuration bundle.
* When set to a negative value: The idle processes are not reaped
  sooner than normal if the processes are using an older configuration
  bundle.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 5
max_searches_per_process = <int>
* On UNIX, specifies the maximum number of searches that each search
process
can run before exiting.
* After a search completes, the search process can wait for another
search to
start and the search process can be reused.
* When set to "0" or "1": The process is never reused.
* When set to a negative value: There is no limit to the number of
  searches that a process can run.
* Has no effect on Windows if 'search_process_mode' is not "auto".
* Default: 500
max_time_per_process = <number>
* Specifies the maximum time, in seconds, that a process can spend
running
searches.
* When a search process is allowed to run more than one search, limits
how
much time a process can accumulate running searches before the process
must exit.
* When set to a negative value: There is no limit on the amount of
  time a search process can spend running.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* NOTE: A search can run longer than the value set for
  'max_time_per_process' without being terminated. This setting ONLY
  prevents the process from being used to run additional searches
  after the maximum time is reached.
* Default: 300 (5 minutes)
process_max_age = <number>
* Specifies the maximum age, in seconds, for a search process.
* When a search process is allowed to run more than one search, a
  process is not reused if the process is older than the value
  specified.
* When set to a negative value: There is no limit on the age of the
  search process.
* This setting includes the time that the process spends idle, which
  is different from the 'max_time_per_process' setting.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* NOTE: A search can run longer than the time set for
  'process_max_age' without being terminated. This setting ONLY
  prevents that process from being used to run more searches after
  the search completes.
* Default: 7200 (120 minutes or 2 hours)
process_min_age_before_user_change = <number>
* The minimum age, in seconds, of an idle process before using a process
from a different user.
* When a search process is allowed to run more than one search, the
  system tries to reuse an idle process that last ran a search by the
  same Splunk user.
* If no such idle process exists, the system tries to use an idle
  process from a different user. The idle process from a different
  user must be idle for at least the value specified for the
  'process_min_age_before_user_change' setting.
* When set to "0": Any idle process by any Splunk user can be reused.
* When set to a negative value: Only a search process by the same
  Splunk user can be reused.
* Has no effect on Windows if 'search_process_mode' is not "auto" or
  if 'max_searches_per_process' is set to 0 or 1.
* Default: 4
search_process_mode = [auto|traditional|debug <debugging-command> [<debugging-args>...]]
* Controls how search processes are started.
* When set to "traditional": Each search process is initialized
  completely from scratch.
* When set to a string beginning with "debug": Searches are routed
  through the <debugging-command>, where the user can "plug in"
  debugging tools.
* The <debugging-command> must reside in one of the following
locations:
* $SPLUNK_HOME/etc/system/bin/
* $SPLUNK_HOME/etc/apps/$YOUR_APP/bin/
* $SPLUNK_HOME/bin/scripts/
* The <debugging-args> are passed, followed by the search command it
would normally run, to <debugging-command>
* For example, given the following setting:
search_process_mode = debug
$SPLUNK_HOME/bin/scripts/search-debugger.sh 5
A command similar to the following is run:
$SPLUNK_HOME/bin/scripts/search-debugger.sh 5 splunkd search
--id=... --maxbuckets=... --ttl=... [...]
* Default: auto
############################################################################
# Search reuse
############################################################################
# This section contains settings for search reuse.
allow_reuse = <bool>
* Specifies whether to allow normally executed historical searches to be
implicitly re-used for newer requests if the newer request allows it.
* Default: true
reuse_map_maxsize = <int>
* Maximum number of jobs to store in the reuse map.
* Default: 1000
############################################################################
# Splunk Analytics for Hadoop
############################################################################
# This section contains settings for use with Splunk Analytics for
Hadoop.
reduce_duty_cycle = <number>
* The maximum time to spend performing the reduce, as a fraction of
total
search time.
* Must be > 0.0 and < 1.0.
* Default: 0.25
reduce_freq = <integer>
* When the specified number of chunks is reached, attempt to reduce
the intermediate results.
* When set to "0": Specifies that there is never an attempt to reduce
  the intermediate result.
* Default: 10
unified_search = <bool>
* Specifies if unified search is turned on for Hunk archiving.
* Default: false
############################################################################
# Status
############################################################################
# This section contains settings for search status.
status_cache_size = <int>
* The number of status entries for search jobs that splunkd can cache
  in RAM. This cache improves the performance of the jobs endpoint.
* Default: 10000
status_period_ms = <int>
* The minimum amount of time, in milliseconds, between successive
status/info.csv file updates.
* This setting ensures that search does not spend significant time just
updating these files.
* This is typically important for a very large number of search peers.
* It could also be important for extremely rapid responses from search
peers, when the search peers have very little work to do.
* Default: 1000 (1 second)
############################################################################
# Timelines
############################################################################
# This section contains settings for timelines.
remote_event_download_finalize_pool = <int>
* Size of the pool, in threads, responsible for writing out the full
remote events.
* Default: 5
remote_event_download_initialize_pool = <int>
* Size of the pool, in threads, responsible for initiating the remote
event fetch.
* Default: 5
remote_event_download_local_pool = <int>
* Size of the pool, in threads, responsible for reading full local
events.
* Default: 5
remote_timeline = [0|1]
* Specifies if the timeline can be computed remotely to enable better
map/reduce scalability.
* Default: 1 (true)
remote_timeline_connection_timeout = <int>
* Connection timeout, in seconds, for fetching events processed by
remote
peer timeliner.
* Default: 5.
remote_timeline_fetchall = [0|1]
* When set to "1" (true): Splunk software fetches all events
  accessible through the timeline from the remote peers before the
  job is considered done.
  * Fetching of all events might delay the finalization of some
    searches, typically those running in verbose mode from the main
    Search view in Splunk Web.
  * This potential performance impact can be mitigated by lowering the
    'max_events_per_bucket' setting.
* When set to "0" (false): The search peers might not ship all
  matching events to the search head, particularly if there is a very
  large number of them.
  * Skipping the complete fetching of events back to the search head
    will result in prompt search finalization.
  * Some events may not be available to browse in the UI.
* This setting does NOT affect the accuracy of search results computed
  by reporting searches.
* Default: 1 (true)
remote_timeline_max_count = <int>
* Maximum number of events to be stored per timeline bucket on each
search
peer.
* Default: 10000
remote_timeline_max_size_mb = <int>
* Maximum size of disk, in MB, that remote timeline events should take
on each peer.
* If the limit is reached, a DEBUG message is emitted and should be
visible in the job inspector or in messages.
* Default: 100
remote_timeline_min_peers = <int>
* Minimum number of search peers for enabling remote computation of
timelines.
* Default: 1
remote_timeline_parallel_fetch = <bool>
* Specifies whether to connect to multiple peers at the same time when
fetching remote events.
* Default: true
remote_timeline_prefetch = <int>
* Specifies the maximum number of full events that each peer should
  proactively send at the beginning.
* Default: 100
remote_timeline_receive_timeout = <int>
* Receive timeout, in seconds, for fetching events processed by remote
peer
timeliner.
* Default: 10
remote_timeline_send_timeout = <int>
* Send timeout, in seconds, for fetching events processed by remote peer
timeliner.
* Default: 10
remote_timeline_thread = [0|1]
* Specifies whether to use a separate thread to read the full events
  from remote peers if 'remote_timeline' is used and
  'remote_timeline_fetchall' is set to "true".
* Has no effect if 'remote_timeline' or 'remote_timeline_fetchall' is
  set to "false".
* Default: 1 (true)
remote_timeline_touchperiod = <number>
* How often, in seconds, while a search is running to touch remote
timeline
artifacts to keep the artifacts from being deleted by the remote peer.
* When set to "0": The remote timelines are never touched.
* Fractional seconds are allowed.
* Default: 300 (5 minutes)
timeline_events_preview = <bool>
* When set to "true": Display events in the Search app as the events
  are scanned, including events that are in-memory and not yet
  committed, instead of waiting until all of the events are scanned
  to see the search results. You will not be able to expand the event
  information in the event viewer until events are committed.
* When set to "false": Events are displayed only after the events are
  committed (the events are written to the disk).
* This setting might increase disk usage to temporarily save
  uncommitted events while the search is running. Additionally,
  search performance might be impacted.
* Default: false
############################################################################
# TTL
############################################################################
# This section contains time to live (ttl) settings.
cache_ttl = <integer>
* The length of time, in seconds, to persist search cache entries.
* Default: 300 (5 minutes)
default_save_ttl = <integer>
* How long, in seconds, the ttl for a search artifact should be extended
in
response to the save control action.
* When set to 0, the system waits indefinitely.
* Default: 604800 (1 week)
failed_job_ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk
after
a job has failed. The ttl is computed relative to the modtime of the
status.csv file of the job, if the file exists, or the modtime of the
artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime
of
the status.csv file is constantly updated such that the reaper does
not
remove the job from underneath.
* Default: 86400 (24 hours)
remote_ttl = <integer>
* How long, in seconds, the search artifacts from searches run on
  behalf of a search head should be stored on the indexer after
  completion.
* Default: 600 (10 minutes)
ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk
after
the job completes. The ttl is computed relative to the modtime of the
status.csv file of the job, if the file exists, or the modtime of the
artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime
of
the status.csv file is constantly updated such that the reaper does
not
remove the job from underneath.
* Default: 600 (10 minutes)
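For example, a minimal limits.conf sketch that keeps completed search
artifacts around longer than the defaults (the values are illustrative,
not recommendations; longer TTLs use more disk space):

    [search]
    # Keep completed search artifacts for 1 hour instead of 10 minutes.
    ttl = 3600
    # Keep artifacts from failed jobs for 2 days instead of 24 hours.
    failed_job_ttl = 172800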
check_search_marker_done_interval = <integer>
* The amount of time, in seconds, that elapses between checks of search
marker
files, such as hot bucket markers and backfill complete markers.
* This setting is used to identify when the remote search process on the
indexer completes processing all hot bucket and backfill portions of
the search.
* Default: 60
check_search_marker_sleep_interval = <integer>
* The amount of time, in seconds, that the process will sleep between
subsequent search marker file checks.
* This setting is used to put the process into sleep mode periodically
on the
indexer, then wake up and check whether hot buckets and backfill
portions
of the search are complete.
* Default: 1
srtemp_dir_ttl = <integer>
* The time to live, in seconds, for the temporary files and directories
within the intermediate search results directory tree.
* These files and directories are located in
$SPLUNK_HOME/var/run/splunk/srtemp.
* Every 'srtemp_dir_ttl' seconds, the reaper removes files and
directories
within this tree to reclaim disk space.
* The reaper measures the time to live through the newest file
modification time
within the directory.
* When set to 0, the reaper does not remove any files or directories in
this tree.
* Default: 86400 (24 hours)
############################################################################
# Unsupported settings
############################################################################
# This section contains settings that are no longer supported.
enable_status_cache = <bool>
* This is not a user tunable setting. Do not use this setting without
  working in tandem with Splunk personnel. This setting is not tested
  at non-default values.
* This controls whether the status cache is used, which caches
  information about search jobs (and job artifacts) in memory in main
  splunkd.
* Normally this caching is enabled and assists performance. However,
  when using Search Head Pooling, artifacts in the shared storage
  location will be changed by other search heads, so this caching is
  disabled.
* Explicit requests to jobs endpoints, e.g.
  /services/search/jobs/<sid>, are always satisfied from disk,
  regardless of this setting.
* Default: true, except in Search Head Pooling environments, where it
  defaults to false.
############################################################################
# Unused settings
############################################################################
# This section contains settings that have been deprecated. These
settings
# remain listed in this file for backwards compatibility.
max_bucket_bytes = <integer>
* This setting has been deprecated and has no effect.
rr_min_sleep_ms = <int>
* REMOVED. This setting is no longer used.
rr_max_sleep_ms = <int>
* REMOVED. This setting is no longer used.
rr_sleep_factor = <int>
* REMOVED. This setting is no longer used.
# This section contains the stanzas for the SPL commands, except for
# the search command, which has its own section above.
[anomalousvalue]
maxresultrows = <integer>
* Configures the maximum number of events that can be present in memory
at one
time.
* Default: searchresults::maxresultrows (which is by default 50000)
maxvalues = <integer>
* Maximum number of distinct values for a field.
* Default: 100000
maxvaluesize = <integer>
* Maximum size, in bytes, of any single value (truncated to this size if
larger).
* Default: 1000
[associate]
maxfields = <integer>
* Maximum number of fields to analyze.
* Default: 10000
maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* Default: 10000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[autoregress]
maxp = <integer>
* Maximum number of events for auto regression.
* Default: 10000
maxrange = <integer>
* Maximum magnitude of range for p values when given a range.
* Default: 1000
[concurrency]
batch_search_max_pipeline = <int>
* Controls the number of search pipelines launched at the indexer during
batch search.
* Increasing the number of search pipelines should help improve search
performance but there will be an increase in thread and memory usage.
* This value applies only to searches that run on remote indexers.
* Default: 1
max_count = <integer>
* Maximum number of detected concurrencies.
* Default: 10000000
[correlate]
maxfields = <integer>
* Maximum number of fields to correlate.
* Default: 1000
[ctable]
maxvalues = <integer>
* Maximum number of columns/rows to generate (the maximum number of
distinct
values for the row field and column field).
* Default: 1000
[discretize]
default_time_bins = <integer>
* When discretizing time for timechart or explicitly via bin, the
default bins
to use if no span or bins is specified.
* Default: 100
maxbins = <integer>
* Maximum number of bins to discretize into.
* If maxbins is not specified or is set to 0, it defaults to
  searchresults::maxresultrows.
* Default: 50000
[findkeywords]
maxevents = <integer>
* Maximum number of events used by the findkeywords command and the
Patterns tab.
* Default: 50000
[geomfilter]
enable_clipping = <boolean>
* Whether or not polygons are clipped to the viewport provided by the
render client.
* Default: true
enable_generalization = <boolean>
* Whether or not generalization is applied to polygon boundaries to
reduce
point count for rendering.
* Default: true
[geostats]
filterstrategy = <integer>
* Controls the selection strategy on the geoviz map.
* Valid values are 1 and 2.
maxzoomlevel = <integer>
* Controls the number of zoom levels that geostats will cluster
  events on.
zl_0_gridcell_latspan = <float>
* Controls the grid spacing, in degrees of latitude, at the lowest
  zoom level (zoom level 0).
* Grid spacing at other zoom levels is derived from this value by
  reducing it by a factor of 2 at each zoom level.
zl_0_gridcell_longspan = <float>
* Controls the grid spacing, in degrees of longitude, at the lowest
  zoom level (zoom level 0).
* Grid spacing at other zoom levels is derived from this value by
  reducing it by a factor of 2 at each zoom level.
[inputcsv]
mkdir_max_retries = <integer>
* Maximum number of retries for creating a tmp directory (with a
  random name as a subdirectory of $SPLUNK_HOME/var/run/splunk).
* Default: 100
[iplocation]
db_path = <path>
* The absolute path to the GeoIP database in the MMDB format.
* The 'db_path' setting does not support standard Splunk environment
  variables such as SPLUNK_HOME.
* Default: The database that is included with the Splunk platform.
[join]
subsearch_maxout = <integer>
* Maximum result rows in output from subsearch to join against.
* Default: 50000
subsearch_maxtime = <integer>
* Maximum search time, in seconds, before auto-finalization of
subsearch.
* Default: 60
subsearch_timeout = <integer>
* Maximum time, in seconds, to wait for subsearch to fully finish.
* Default: 120
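For example, a minimal limits.conf sketch that raises the subsearch
limits for the join command (the values are illustrative, not
recommendations; raising them increases memory use on the search head):

    [join]
    # Allow up to 100000 rows from the subsearch instead of 50000.
    subsearch_maxout = 100000
    # Let the subsearch run 2 minutes before auto-finalization.
    subsearch_maxtime = 120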
[kmeans]
maxdatapoints = <integer>
* Maximum number of data points to perform kmeans clustering on.
* Default: 100000000 (100 million)
maxkrange = <integer>
* Maximum number of k values to iterate over when specifying a range.
* Default: 100
maxkvalue = <integer>
* Maximum number of clusters to attempt to solve for.
* Default: 1000
[lookup]
batch_index_query = <bool>
* Specifies whether non-memory file lookups (files that are too large)
  should use batched queries to possibly improve performance.
* Default: true
batch_response_limit = <integer>
* When doing batch requests, the maximum number of matches to
  retrieve. If more than this limit of matches would otherwise be
  retrieved, the lookup falls back to non-batch mode matching.
* Default: 5000000
max_matches = <integer>
* Maximum matches for a lookup.
* Valid values range from 1 to 1000.
* Default: 1000
max_memtable_bytes = <integer>
* Maximum size, in bytes, of a static lookup file to use an in-memory
  index for.
* Lookup files with a size above max_memtable_bytes will be indexed
  on disk.
* A large value results in loading large lookup files in memory,
  leading to a bigger process memory footprint.
* Caution must be exercised when setting this parameter to arbitrarily
  high values!
* Default: 10000000 (10MB)
max_reverse_matches = <integer>
* Maximum reverse lookup matches (for search expansion).
* Default: 50
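For example, a minimal limits.conf sketch that lets larger lookup
files be indexed in memory (an illustrative value; see the caution
above about process memory footprint):

    [lookup]
    # Index static lookup files up to 50 MB in memory instead of 10 MB.
    max_memtable_bytes = 50000000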
[metadata]
bucket_localize_max_lookahead = <int>
* This setting is only relevant when using remote storage.
* Specifies the maximum number of buckets the metadata command localizes
for look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of
additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64.
Other
invalid values will be discarded and the default will be substituted.
* Default: 10
maxcount = <integer>
* The total number of metadata search results returned by the search
head;
after the maxcount is reached, any additional metadata results
received from
the search peers will be ignored (not returned).
* A larger number incurs additional memory usage on the search head.
* Default: 100000
maxresultrows = <integer>
* The maximum number of results in a single chunk fetched by the
metadata
command
* A smaller value will require less memory on the search head in
  setups with a large number of peers and many metadata results;
  however, setting this too small will decrease search performance.
* NOTE: Do not change unless instructed to do so by Splunk Support.
* Default: 10000
[mvexpand]
max_mem_usage_mb = <non-negative integer>
* Overrides the default value for 'max_mem_usage_mb'.
* Limits the amount of RAM, in megabytes (MB), that a batch of events
  or results will use in the memory of a search process.
* See the definition in the [default] stanza for 'max_mem_usage_mb'
  for more details.
* Default: 500
[mvcombine]
[outputlookup]
outputlookup_check_permission = <bool>
* Specifies whether the outputlookup command should verify that users
have write permissions to CSV lookup table files.
* outputlookup_check_permission is used in conjunction with the
transforms.conf setting check_permission.
* The system only applies outputlookup_check_permission to .csv lookup
configurations in transforms.conf that have check_permission=true.
* You can set lookup table file permissions in the .meta file for each
lookup
file, or through the Lookup Table Files page in Settings. By default,
only
users who have the admin or power role can write to a shared CSV
lookup
file.
* Default: false
[rare]
maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows
* Default: 50000
maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Default: 100000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[set]
maxresultrows = <integer>
* The maximum number of results the set command will use from each
result
set to compute the required set operation.
* Default: 50000
[sort]
maxfiles = <integer>
* Maximum files to open at once. Multiple passes are made if the number
of
result chunks exceeds this threshold.
* Default: 64.
[spath]
extract_all = <boolean>
* Controls whether we respect automatic field extraction when spath is
invoked manually.
* If true, we extract all fields regardless of settings. If false, we
only
extract fields used by later search commands.
* Default: true
extraction_cutoff = <integer>
* For extract-all spath extraction mode, only apply extraction to the
first
<integer> number of bytes.
* Default: 5000
[stats|sistats]
approx_dc_threshold = <integer>
* When using approximate distinct count (that is, estdc(<field>) in
  stats/chart/timechart), do not use approximated results if the
  actual number of distinct values is less than this number.
* Default: 1000
dc_digest_bits = <integer>
* The size of the digest used for approximating distinct count will
  be 2^<integer> bytes.
* Must be >= 8 (128B) and <= 16 (64KB).
* Default: 10 (equivalent to 1KB)
default_partitions = <int>
* Number of partitions to split incoming data into for
parallel/multithreaded reduce
* Default: 1
list_maxsize = <int>
* Maximum number of list items to emit when using the list() function
  in stats/sistats.
* Default: 100
maxmem_check_freq = <integer>
* How frequently, in rows, to check whether the in-memory data
  structure size limit, as specified by 'max_mem_usage_mb', is being
  exceeded.
* Default: 50000
maxresultrows = <integer>
* Maximum number of rows allowed in the process memory.
* When the search process exceeds 'max_mem_usage_mb' and
  'maxresultrows', data is spilled out to the disk.
* If not specified, defaults to searchresults::maxresultrows
* Default: 50000
max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size.
* Default: 10000
maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* When set to "0": Specifies an unlimited number of values.
* Default: 0
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* When set to "0": Specifies an unlimited value length.
* Default: 0
max_valuemap_bytes = <integer>
* For the sistats command, the maximum encoded length of the valuemap,
per result written out.
* If limit is exceeded, extra result rows are written out as needed.
* 0 = no limit per row
* Default: 100000
natural_sort_output = <bool>
* Perform a natural sort on the output of stats if the output size is
  less than or equal to 'maxresultrows'.
* A natural sort means that numbers are sorted numerically and
  non-numbers are sorted lexicographically.
* Default: true
partitions_limit = <int>
* Maximum number of partitions to split into that can be specified via
the
'partitions' option.
* When exceeded, the number of partitions is reduced to this limit.
* Default: 100
perc_method = nearest-rank|interpolated
* Which method to use for computing percentiles (and medians, which
  are the 50th percentile).
* nearest-rank picks the number with 0-based rank
  R = floor((percentile/100)*count)
* interpolated means given F = (percentile/100)*(count-1),
  pick ranks R1 = floor(F) and R2 = ceiling(F).
  Answer = (R2 * (F - R1)) + (R1 * (1 - (F - R1)))
* See the Wikipedia percentile entries on nearest rank and
  "alternative methods".
* Default: nearest-rank
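* For example (illustrative arithmetic, reading R1 and R2 in the
  formula above as the values at those ranks): for the sorted values
  10, 20, 30, 40 and the 50th percentile, count = 4, so nearest-rank
  picks 0-based rank R = floor((50/100)*4) = 2, giving 30. With
  interpolated, F = (50/100)*(4-1) = 1.5, R1 = 1, R2 = 2, and the
  answer interpolates between the values at ranks 1 and 2 (20 and 30):
  30*0.5 + 20*0.5 = 25.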
perc_digest_type = rdigest|tdigest
* Which digest algorithm to use for computing percentiles (and
  medians, which are the 50th percentile).
* rdigest uses the rdigest_k, rdigest_maxnodes, and perc_method
  properties.
* tdigest uses the tdigest_k and tdigest_max_buffer_size properties.
* Default: tdigest
sparkline_maxsize = <int>
* Maximum number of elements to emit for a sparkline.
* Default: The value of the 'list_maxsize' setting
sparkline_time_steps = <time-step-string>
* Specify a set of time steps in order of decreasing granularity. Use an
integer and one of the following time units to indicate each step.
* s = seconds
* m = minutes
* h = hours
* d = days
* month
* A time step from this list is selected based on the
  <sparkline_maxsize> setting.
* The lowest <sparkline_time_steps> value that does not exceed the
  maximum number of bins is used.
* Example:
  * If you have the following configurations:
    * <sparkline_time_steps> = 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
    * <sparkline_maxsize> = 100
  * The timespan for 7 days of data is 604,800 seconds.
  * Span = 604,800 / <sparkline_maxsize>.
  * If sparkline_maxsize = 100, then
    span = (604,800 / 100) = 6,048 sec == 1.68 hours.
  * The "1d" time step is used because it is the lowest value that
    does not exceed the maximum number of bins.
* Default: 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
rdigest_k = <integer>
* The rdigest compression factor.
* Lower values mean more compression.
* After compression, the number of nodes is guaranteed to be greater
  than or equal to 11 times k.
* Must be greater than or equal to 2.
* Default: 100
rdigest_maxnodes = <integer>
* Maximum rdigest nodes before automatic compression is triggered.
* When set to "1": Specifies to automatically configure the maximum
  number of nodes based on the k value.
* Default: 1
tdigest_k = <integer>
* The tdigest compression factor.
* Higher values mean less compression, more memory usage, but better
  accuracy.
* Must be greater than or equal to 1.
* Default: 50
tdigest_max_buffer_size = <integer>
* Maximum number of elements before automatic reallocation of buffer
storage is triggered.
* Smaller values result in less memory usage but are slower.
* Very small values (<100) are not recommended as they will be very
  slow.
* Larger values help performance up to a point, after which
  performance actually degrades.
* Recommended range is around 10*tdigest_k to 30*tdigest_k.
* Default: 1000
[top]
maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows.
* Default: 50000
maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Default: 100000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[transactions]
maxopentxn = <integer>
* Specifies the maximum number of not yet closed transactions to keep in
the
open pool before starting to evict transactions.
* Default: 5000
maxopenevents = <integer>
* Specifies the maximum number of events that are part of open transactions before transaction eviction starts happening, using LRU policy.
* Default: 100000
[tscollect]
squashcase = <boolean>
* The default value of the 'squashcase' argument if not specified by the
command
* Default: false
keepresults = <boolean>
* The default value of the 'keepresults' argument if not specified by
the command
* Default: false
[tstats]
allow_old_summaries = <boolean>
* The default value of the 'allow_old_summaries' arg if not specified by the command.
* When running tstats on an accelerated datamodel, allow_old_summaries=false ensures that we check whether the datamodel search in each bucket's summary metadata is up to date with the current datamodel search. Only summaries that are considered up to date will be used to deliver results.
* The allow_old_summaries=true attribute overrides this behavior and delivers results even from bucket summaries that are considered out of date with the current datamodel.
* Default: false
apply_search_filter = <boolean>
* Controls whether we apply role-based search filters when users run
tstats on
normal index data
* Note: we never apply search filters to data collected with tscollect
or
datamodel acceleration
* Default: true
bucket_localize_max_lookahead = <int>
* This setting is only relevant when using remote storage.
* Specifies the maximum number of buckets the tstats command localizes
for
look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of
additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64.
Other
invalid values will be discarded and the default will be substituted.
* Default: 10
chunk_size = <unsigned int>
* ADVANCED: The default value of the 'chunk_size' arg if not specified by the command.
* This argument controls how many events are retrieved at a time within a single TSIDX file when answering queries.
* Consider lowering this value if tstats searches are using too much memory (cannot be set lower than 10000).
* Larger values will tend to cause more memory to be used (per search)
and
might have performance benefits.
* Smaller values will tend to reduce performance and might reduce memory
used
(per search).
* Altering this value without careful measurement is not advised.
* Default: 10000000
summariesonly = <boolean>
* The default value of 'summariesonly' arg if not specified by the
command
* When running tstats on an accelerated datamodel, summariesonly=false
implies
a mixed mode where we will fall back to search for missing TSIDX data
* summariesonly=true overrides this mixed mode to only generate results
from
TSIDX data, which may be incomplete
* Default: false
warn_on_missing_summaries = <boolean>
* ADVANCED: Only meant for debugging summariesonly=true searches on
accelerated datamodels.
* When true, search will issue a warning for a tstats
summariesonly=true
search for the following scenarios:
a) If there is a non-hot bucket that has no corresponding datamodel
acceleration summary whatsoever.
b) If the bucket's summary does not match with the current
datamodel
acceleration search.
* Default: false
[typeahead]
cache_ttl_sec = <integer>
* How long, in seconds, the typeahead cached results are valid.
* Default: 300
fetch_multiplier = <integer>
* A multiplying factor that determines the number of terms to fetch from
the
index, fetch = fetch_multiplier x count.
* Default: 50
max_concurrent_per_user = <integer>
* The maximum number of concurrent typeahead searches per user. Once
this
maximum is reached only cached typeahead results might be available
* Default: 3
maxcount = <integer>
* Maximum number of typeahead results to find.
* Default: 1000
min_prefix_length = <integer>
* The minimum length of the string prefix after which to provide
typeahead.
* Default: 1
use_cache = [0|1]
* Specifies whether the typeahead cache will be used if use_cache is not
specified in the command line or endpoint.
* Default: true or 1
[typer]
maxlen = <int>
* In eventtyping, pay attention to first <int> characters of any
attribute
(such as _raw), including individual tokens. Can be overridden by
supplying
the typer operator with the argument maxlen (for example,
"|typer maxlen=300").
* Default: 10000
[xyseries]
GENERAL SETTINGS
[authtokens]
expiration_time = <integer>
* Expiration time, in seconds, of auth tokens.
* Default: 3600 (60 minutes)
[auto_summarizer]
allow_event_summarization = <bool>
* Whether auto summarization of searches whose remote part returns
events
rather than results will be allowed.
* Default: false
cache_timeout = <integer>
* The minimum amount of time, in seconds, to cache auto summary details
and search hash codes.
* The cached entry expires randomly between cache_timeout and
2*cache_timeout value.
* Default: 600 (10 minutes)
detailed_dashboard = <bool>
* Turn on/off the display of both normalized and regular summaries in
the
Report Acceleration summary dashboard and details.
* Default: false
maintenance_period = <integer>
* The period of time, in seconds, at which the auto summarization maintenance happens.
* Default: 1800 (30 minutes)
max_run_stats = <int>
* Maximum number of summarization run statistics to keep track of and expose via REST.
* Default: 48
max_verify_buckets = <int>
* When verifying buckets, stop after verifying this many buckets if no
failures
have been found
* 0 means never
* Default: 100
max_verify_bucket_time = <int>
* Maximum time, in seconds, to spend verifying each bucket.
* Default: 15
max_verify_ratio = <number>
* Maximum fraction of data in each bucket to verify
* Default: 0.1 (10%)
max_verify_total_time = <int>
* Maximum total time in seconds to spend doing verification, regardless of whether any buckets have failed.
* When set to "0": Specifies no limit.
* Default: 0
normalized_summaries = <bool>
* Turn on/off normalization of report acceleration summaries.
* Default: true
return_actions_with_normalized_ids = [yes|no|fromcontext]
* Report acceleration summaries are stored under a signature/hash which
can be
regular or normalized.
* Normalization improves the re-use of pre-built summaries but is not
supported before 5.0. This config will determine the default value
of how
normalization works (regular/normalized)
* When set to "fromcontext": Specifies that the end points and summaries would be operating based on context.
* Normalization strategy can also be changed via admin/summarization
REST calls
with the "use_normalization" parameter which can take the values
"yes"/"no"/"fromcontext"
* Default: fromcontext
search_2_hash_cache_timeout = <integer>
* The amount of time, in seconds, to cache search hash codes
* Default: The value of the 'cache_timeout' setting, which by default is 600 (10 minutes)
shc_accurate_access_counts = <bool>
* Only relevant if you are using search head clustering
* Turn on/off to make acceleration summary access counts accurate on the captain, by centralizing them.
verify_delete = <bool>
* Should summaries that fail verification be automatically deleted?
* Default: false
[export]
add_offset = <bool>
* Add an offset/row number to JSON streaming output
* Default: true
add_timestamp = <bool>
* Add an epoch time timestamp to JSON streaming output that reflects the time the results were generated/retrieved
* Default: false
[extern]
perf_warn_limit = <integer>
* Warn when external scripted command is applied to more than this many
events
* When set to "0": Specifies no message (the message is always INFO level)
* Default: 10000
[http_input]
max_content_length = <integer>
* The maximum length, in bytes, of HTTP request content that is
accepted by the HTTP Event Collector server.
* Default: 838860800 (~ 800 MB)
max_number_of_ack_channel = <integer>
* The maximum number of ACK channels accepted by HTTP Event Collector
server.
* Default: 1000000 (~ 1 million)
max_number_of_acked_requests_pending_query = <integer>
* The maximum number of ACKed requests pending query on HTTP Event
Collector server.
* Default: 10000000 (~ 10 million)
max_number_of_acked_requests_pending_query_per_ack_channel = <integer>
* The maximum number of ACKed requests pending query per ACK channel on the HTTP Event Collector server.
* Default: 1000000 (~ 1 million)
metrics_report_interval = <integer>
* The interval, in seconds, of logging input metrics report.
* Default: 60 (1 minute)
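As a hypothetical illustration, a local limits.conf override that tightens the HEC payload limit might look like this (the value shown is an example, not a recommendation):

[http_input]
# reject HEC payloads larger than roughly 100 MB
max_content_length = 104857600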
[indexpreview]
max_preview_bytes = <integer>
* Maximum number of bytes to read from each file during preview
* Default: 2000000 (2 MB)
max_results_perchunk = <integer>
* Maximum number of results to emit per call to preview data generator
* Default: 2500
soft_preview_queue_size = <integer>
* Loosely-applied maximum on number of preview data objects held in
memory
* Default: 100
[inputproc]
file_tracking_db_threshold_mb = <integer>
* This setting controls the trigger point at which the file tracking db
(also
commonly known as the "fishbucket" or btree) rolls over. A new
database is
created in its place. Writes are targeted at new db. Reads are first
targeted at new db, and we fall back to old db for read failures. Any
reads
served from old db successfully will be written back into new db.
* MIGRATION NOTE: if this setting doesn't exist, the initialization code
in
splunkd triggers an automatic migration step that reads in the current
value
for "maxDataSize" under the "_thefishbucket" stanza in indexes.conf
and
writes this value into etc/system/local/limits.conf.
learned_sourcetypes_limit = <0 or positive integer>
* Limits the number of entries added to the learned app for performance reasons.
* If nonzero, limits two properties of data added to the learned app by the file classifier.
* The number of sourcetypes added to the learned app's props.conf file will be limited to approximately this number.
* The number of file-content fingerprints added to the learned app's
sourcetypes.conf file will be limited to approximately this number.
* The tracking for uncompressed and compressed files is done separately,
so in
some cases this value may be exceeded.
* This limit is not the recommended solution for auto-identifying
sourcetypes.
The usual best practices are to set sourcetypes in input stanzas, or
alternatively to apply them based on filename pattern in props.conf
[source::<pattern>] stanzas.
* Default: 1000
max_fd = <integer>
* Maximum number of file descriptors that an ingestion pipeline in Splunk
will keep open, to capture any trailing data from files that are
written
to very slowly.
* Note that this limit will be applied per ingestion pipeline. For more
information about multiple ingestion pipelines see
parallelIngestionPipelines
in the server.conf.spec file.
* With N parallel ingestion pipelines the maximum number of file
descriptors that
can be open across all of the ingestion pipelines will be N * max_fd.
* Default: 100
monitornohandle_max_heap_mb = <integer>
* Controls the maximum memory used by the Windows-specific modular input
MonitorNoHandle in user mode.
* The memory of this input grows in size when the data being produced
by applications writing to monitored files comes in faster than the
Splunk
system can accept it.
* When set to 0, the heap size (memory allocated in the modular input)
can grow
without limit.
* If this size is limited, and the limit is encountered, the input will
drop
some data to stay within the limit.
* Default: 0
tailing_proc_speed = <integer>
* REMOVED. This setting is no longer used.
monitornohandle_max_driver_mem_mb = <integer>
* Controls the maximum NonPaged memory used by the Windows-specific
kernel driver of modular input
MonitorNoHandle.
* The memory of this input grows in size when the data being produced by applications writing to monitored files comes in faster than the Splunk system can accept it.
* When set to 0, the NonPaged memory size (memory allocated in the
kernel driver of modular input) can grow
without limit.
* If this size is limited, and the limit is encountered, the input will
drop
some data to stay within the limit.
* Default: 0
monitornohandle_max_driver_records = <integer>
* Controls memory growth by limiting the maximum in-memory records
stored
by the kernel module of Windows-specific modular input
MonitorNoHandle.
* When monitornohandle_max_driver_mem_mb is set to > 0, this config is
ignored.
* monitornohandle_max_driver_mem_mb and
monitornohandle_max_driver_records are mutually exclusive.
* If the limit is encountered, the input will drop some data to stay
within the limit.
* Default: 500
time_before_close = <integer>
* MOVED. This setting is now configured per-input in inputs.conf.
* Specifying this setting in limits.conf is DEPRECATED, but for now will
override the setting for all monitor inputs.
[journal_compression]
threads = <integer>
* Specifies the maximum number of indexer threads which will work on compressing hot bucket journal data.
* This setting does not typically need to be modified.
* Default: The number of CPU threads of the host machine
[kv]
avg_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that the average (over search results) execution time of a key-value pair extractor will be allowed to take before warning. Once the average becomes larger than this amount of time, a warning is issued.
* Default: 500 (.5 seconds)
limit = <integer>
* The maximum number of fields that an automatic key-value field
extraction
(auto kv) can generate at search time.
* If search-time field extractions are disabled (KV_MODE=none in
props.conf)
then this setting determines the number of index-time fields that will
be
returned.
* The summary fields 'host', 'index', 'source', 'sourcetype',
'eventtype',
'linecount', 'splunk_server', and 'splunk_server_group' do not count
against
this limit and will always be returned.
* Increase this setting if, for example, you have indexed data with a
large
number of columns and want to ensure that searches display all fields
from
the data.
* Default: 100
maxchars = <integer>
* Truncate _raw to this size and then do auto KV.
* Default: 10240 characters
maxcols = <integer>
* When non-zero, the point at which kv should stop creating new fields.
* Default: 512
max_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that a key-value pair extractor will be allowed to take before warning. If the extractor exceeds this execution time on any event, a warning is issued.
* Default: 1000 (1 second)
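As a hypothetical illustration, a local limits.conf override that raises the auto kv field limit for wide data might look like this (values are examples, not recommendations):

[kv]
# allow auto kv to generate up to 200 fields per event
limit = 200
# warn if a single extractor spends more than 2 seconds of CPU time on an event
max_extractor_time = 2000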
[kvstore]
max_fields_per_acceleration = <unsigned integer>
* The maximum number of fields that can be part of a compound acceleration (i.e. an acceleration with multiple keys)
* Valid values range from 0 to 50
* Default: 10
[input_channels]
max_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk
Support.
lowater_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk Support.
inactive_eligibility_age_seconds = <integer>
* Internal setting, do not change unless instructed to do so by Splunk
Support.
[ldap]
allow_multiple_matching_users = <bool>
* This controls whether we allow login when we find multiple entries
with the
same value for the username attribute
* When multiple entries are found, we choose the first user DN
lexicographically
* Setting this to false is more secure as it does not allow any
ambiguous
login, but users with duplicate entries will not be able to login.
* Default: true
[metrics]
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Default: 30
maxseries = <integer>
* The number of series to include in the per_x_thruput reports in
metrics.log.
* Default: 10
[metrics:tcpin_connections]
aggregate_metrics = [true|false]
* For each splunktcp connection from a forwarder, Splunk logs metrics information every metrics interval.
* When there is a large number of forwarders connected to an indexer, the amount of information logged can take a lot of space in metrics.log. When set to true, Splunk aggregates information across each connection and reports it only once per metrics interval.
* Default: false
suppress_derived_info = [true|false]
* For each forwarder connection, _tcp_Bps, _tcp_KBps, _tcp_avg_thruput, and _tcp_Kprocessed are logged in metrics.log.
* This information can be derived from the kb value. When set to true, the above derived info will not be emitted.
* Default: false
[pdf]
[realtime]
alerting_period_ms = <int>
* This limits the frequency with which we will trigger alerts during a realtime search.
* A value of 0 means unlimited; we will trigger an alert for every batch of events we read. In dense realtime searches with expensive alerts, this can overwhelm the alerting system.
* Precedence: Searchhead
* Default: 0
blocking = [0|1]
* Specifies whether the indexer should block if a queue is full.
* Default: false
default_backfill = <bool>
* Specifies if windowed real-time searches should backfill events
* Default: true
enforce_time_order = <bool>
* Specifies if real-time searches should ensure that events are sorted in ascending time order (the UI will automatically reverse the order that it displays events for real-time searches, so in effect the latest events will be first).
* Default: true
indexfilter = [0|1]
* Specifies whether the indexer should prefilter events for efficiency.
* Default: 1 (true)
indexed_realtime_update_interval = <int>
* When you run an indexed realtime search, the list of searchable
buckets
needs to be updated. If the Splunk software is installed on a cluster,
the list of allowed primary buckets is refreshed. If not installed on a cluster, the list of buckets, including any new hot buckets, is refreshed.
This setting controls the interval for the refresh. The setting must
be
less than the "indexed_realtime_disk_sync_delay" setting. If your
realtime
buckets transition from new to warm in less time than the value
specified
for the "indexed_realtime_update_interval" setting, data will be
skipped
by the realtime search in a clustered environment.
* Precedence: Indexers
* Default: 30
indexed_realtime_cluster_update_interval = <int>
* This setting is deprecated. Use the
"indexed_realtime_update_interval"
setting instead.
* While running an indexed realtime search, if we are on a cluster we need to update the list of allowed primary buckets. This controls the interval at which we do this, and it must be less than the indexed_realtime_disk_sync_delay. If your buckets transition from brand new to warm in less than this time, indexed realtime will lose data in a clustered environment.
* Precedence: Indexers
* Default: 30
indexed_realtime_default_span = <int>
* An indexed realtime search is made up of many component historical
searches
that by default will span this many seconds. If a component search is
not
completed in this many seconds the next historical search will span
the extra
seconds. To reduce the overhead of running an indexed realtime search
you can
change this span to delay longer before starting the next component
historical search.
* Precedence: Indexers
* Default: 1
indexed_realtime_disk_sync_delay = <int>
* This setting controls the number of seconds to wait for disk flushes to finish when using indexed/continuous/pseudo realtime search, so that we see all of the data.
* After indexing there is a non-deterministic period where the files on
disk
when opened by other programs might not reflect the latest flush to
disk,
particularly when a system is under heavy load.
* Precedence: SearchHead overrides Indexers
* Default: 60
indexed_realtime_maximum_span = <int>
* While running an indexed realtime search, if the component searches regularly take longer than indexed_realtime_default_span seconds, then the indexed realtime search can fall more than indexed_realtime_disk_sync_delay seconds behind realtime. Use this setting to set a limit, after which we will drop data in order to catch back up to the specified delay from realtime, and only search the default span of seconds.
* Precedence: API overrides SearchHead overrides Indexers
* Default: 0 (unlimited)
indexed_realtime_use_by_default = <bool>
* Should we use the indexedRealtime mode by default
* Precedence: SearchHead
* Default: false
local_connect_timeout = <int>
* Connection timeout, in seconds, for an indexer's search process when
connecting to that indexer's splunkd.
* Default: 5
local_receive_timeout = <int>
* Receive timeout, in seconds, for an indexer's search process when
connecting to that indexer's splunkd.
* Default: 5
local_send_timeout = <int>
* Send timeout, in seconds, for an indexer's search process when
connecting
to that indexer's splunkd.
* Default: 5
max_blocking_secs = <int>
* Maximum time, in seconds, to block if the queue is full (meaningless
if blocking = false)
* 0 means no limit
* Default: 60
queue_size = <int>
* Size of queue for each real-time search (must be >0).
* Default: 10000
[restapi]
maxresultrows = <integer>
* Maximum result rows to be returned by /events or /results getters from
REST
API.
* Default: 50000
jobscontentmaxcount = <integer>
* Maximum length of a property in the contents dictionary of an entry
from
/jobs getter from REST API
* Value of 0 disables truncation
* Default: 0
time_format_reject = <regular expression>
* HTTP parameters for time_format and output_time_format that match this regular expression will be rejected.
* Default: [<>!] , which means that the less-than '<', greater-than '>', and exclamation point '!' characters are not allowed.
[reversedns]
rdnsMaxDutyCycle = <integer>
* Generate diagnostic WARN in splunkd.log if reverse dns lookups are
taking
more than this percent of time
* Range 0-100
* Default: 10
[sample]
maxsamples = <integer>
* Default: 10000
maxtotalsamples = <integer>
* Default: 100000
[scheduler]
action_execution_threads = <integer>
* Number of threads to use to execute alert actions, change this number
if your
alert actions take a long time to execute.
* This number is capped at 10.
* Default: 2
actions_queue_size = <integer>
* The number of alert notifications to queue before the scheduler starts blocking. Set to 0 for infinite size.
* Default: 100
actions_queue_timeout = <integer>
* The maximum amount of time, in seconds, to block when the action queue
size is
full.
* Default: 30
alerts_expire_period = <integer>
* The amount of time, in seconds, between expired alert removal.
* This period controls how frequently the alerts list is scanned; the only benefit from reducing this is better resolution in the number of alerts fired at the savedsearch level.
* Change not recommended.
* Default: 120
alerts_max_count = <integer>
* Maximum number of unexpired alerts to keep information about for the alerts manager. When this number is reached, Splunk will start discarding the oldest alerts.
* Default: 50000
alerts_max_history = <integer>[s|m|h|d]
* Maximum time to search in the past for previously triggered alerts.
* splunkd uses this property to populate the Activity -> Triggered
Alerts
page at startup.
* Values greater than the default may cause slowdown.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute,
mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 7d
alerts_scoping = host|splunk_server|all
* Determines the scoping to use on the search to populate the triggered
alerts
page. Choosing splunk_server will result in the search query
using splunk_server=local, host will result in the search query using
host=<search-head-host-name>, and all will have no scoping added to
the
search query.
* Default: splunk_server
auto_summary_perc = <integer>
* The maximum number of concurrent searches to be allocated for auto
summarization, as a percentage of the concurrent searches that the
scheduler
can run.
* Auto summary searches include:
* Searches which generate the data for the Report Acceleration
feature.
* Searches which generate the data for Data Model acceleration.
* Note: user scheduled searches take precedence over auto summary
searches.
* Default: 50
auto_summary_perc.<n> = <integer>
auto_summary_perc.<n>.when = <cron string>
* The same as auto_summary_perc but the value is applied only when the
cron
string matches the current time. This allows auto_summary_perc to
have
different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific to most specific with increasing <n>.
* The scheduler looks in reverse-<n> order looking for the first match.
* If these settings aren't provided at all, or no "when" matches the current time, the value falls back to the non-<n> value of auto_summary_perc.
concurrency_message_throttle_time = <int>[s|m|h|d]
* Amount of time controlling throttling between messages warning about
scheduler
concurrency limits.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute,
mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 10m
introspection_lookback = <duration-specifier>
* The amount of time to "look back" when reporting introspection
statistics.
* For example: what is the number of dispatched searches in the last 60
minutes?
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to
1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs,
hours,
d, day, days, w, week, weeks.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* Default: 1h
max_action_results = <integer>
* The maximum number of results to load when triggering an alert action.
* Default: 50000
max_continuous_scheduled_search_lookback = <duration-specifier>
* The maximum amount of time to run missed continuous scheduled searches for, once Splunk comes back up, in the event it was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to
1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs,
hours,
d, day, days, w, week, weeks, mon, month, months.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* A value of 0 means no lookback.
* Default: 24h
max_lock_files = <int>
* The number of most recent lock files to keep around.
* This setting only applies in search head pooling.
max_lock_file_ttl = <int>
* Time, in seconds, that must pass before reaping a stale lock file.
* Only applies in search head pooling.
max_per_result_alerts = <int>
* Maximum number of alerts to trigger for each saved search instance (or
real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit
* Default: 500
max_per_result_alerts_time = <integer>
* Maximum amount of time, in seconds, to spend triggering alerts for each saved search instance (or real-time results preview for RT alerts).
* Only applies in non-digest mode alerting. Use 0 to disable this limit.
* Default: 300 (5 minutes)
max_searches_perc = <integer>
* The maximum number of searches the scheduler can run, as a percentage of the maximum number of concurrent searches. See [search] max_searches_per_cpu for how to set the system-wide maximum number of searches.
* Default: 50
max_searches_perc.<n> = <integer>
max_searches_perc.<n>.when = <cron string>
* The same as max_searches_perc but the value is applied only when the
cron
string matches the current time. This allows max_searches_perc to
have
different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific to most specific with increasing <n>.
* The scheduler looks in reverse-<n> order looking for the first match.
* If these settings aren't provided at all, or no "when" matches the current time, the value falls back to the non-<n> value of max_searches_perc.
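For example, a hypothetical configuration along these lines lets the scheduler use a larger share of the concurrent-search limit overnight (the cron string and percentages are illustrative):

max_searches_perc = 50
# allow 75% between midnight and 5am
max_searches_perc.1 = 75
max_searches_perc.1.when = * 0-5 * * *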
persistance_period = <integer>
* The period, in seconds, between scheduler state persistence to disk. The scheduler currently persists the suppression and fired-unexpired alerts to disk.
* This is relevant only in search head pooling mode.
* Default: 30
priority_runtime_factor = <double>
* The amount to scale the priority runtime adjustment by.
* Every search's priority is made higher (worse) by its typical running time. Since many searches run in fractions of a second and the priority is integral, adjusting by a raw runtime wouldn't change the result; therefore, it's scaled by this value.
* Default: 10
priority_skipped_factor = <double>
* The amount to scale the skipped adjustment by.
* A potential issue with the priority_runtime_factor is that now
longer-running
searches may get starved. To balance this out, make a search's
priority
lower (better) the more times it's been skipped. Eventually, this
adjustment
will outweigh any worse priority due to a long runtime. This value
controls
how quickly this happens.
* Default: 1
saved_searches_disabled = <bool>
* Whether saved search jobs are disabled by the scheduler.
* Default: false
scheduled_view_timeout = <int>[s|m|h|d]
* The maximum amount of time that a scheduled view (pdf delivery) would
be
allowed to render
* Relevant units are: s, sec, second, secs, seconds, m, min, minute,
mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 60m
shc_role_quota_enforcement = <bool>
* When this attribute is enabled, the search head cluster captain
enforces
user-role quotas for scheduled searches globally (cluster-wide).
* A given role can have (n * number_of_members) searches running cluster-wide, where n is the quota for that role as defined by srchJobsQuota and rtSrchJobsQuota on the captain, and number_of_members includes the members capable of running scheduled searches.
* Scheduled searches will therefore not have an enforcement of user role
quota on a per-member basis.
* Role-based disk quota checks (srchDiskQuota in authorize.conf) can be
enforced only on a per-member basis.
These checks are skipped when shc_role_quota_enforcement is enabled.
* Quota information is conveyed from the members to the captain. Network
delays
can cause the quota calculation on the captain to vary from the actual
values
in the members and may cause search limit warnings. This should clear
up as
the information is synced.
* Default: false
shc_syswide_quota_enforcement = <bool>
* When this is enabled, the maximum number of concurrent searches is enforced globally (cluster-wide) by the captain for scheduled searches. Concurrent searches include both scheduled searches and ad hoc searches.
* This is (n * number_of_members), where n is the max concurrent searches per node (see max_searches_per_cpu for a description of how this is computed) and number_of_members includes members capable of running scheduled searches.
* Scheduled searches will therefore not have an enforcement of
instance-wide
concurrent search quota on a per-member basis.
* Note that this does not control the enforcement of the scheduler
quota.
For a search head cluster, that is defined as
(max_searches_perc * number_of_members)
and is always enforced globally on the captain.
* Quota information is conveyed from the members to the captain. Network
delays
can cause the quota calculation on the captain to vary from the actual
values
in the members and may cause search limit warnings. This should clear
up as
the information is synced.
* Default: false
shc_local_quota_check = <bool>
* DEPRECATED. Local (per-member) quota check is enforced by default.
* To disable per-member quota checking, enable one of the cluster-wide
quota
checks (shc_role_quota_enforcement or shc_syswide_quota_enforcement).
* For example, setting shc_role_quota_enforcement=true turns off local
role
quota enforcement for all nodes in the cluster and is enforced
cluster-wide
by the captain.
shp_dispatch_to_slave = <bool>
* By default the scheduler should distribute jobs throughout the pool.
* Default: true
search_history_load_timeout = <duration-specifier>
* The maximum amount of time to defer running continuous scheduled
searches
while waiting for the KV Store to come up in order to load historical
data.
This is used to prevent gaps in continuous scheduled searches when
splunkd
was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to
1.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute,
mins,
minutes.
* For example: "60s" = 60 seconds, "5m" = 5 minutes.
* Default: 2m
[search_metrics]
debug_metrics = <bool>
* This indicates whether we should output more detailed search metrics
for
debugging.
* This will do things like break out where the time was spent by peer,
and may
add additional deeper levels of metrics.
* This is NOT related to "metrics.log" but to the "Execution Costs" and
"Performance" fields in the Search inspector, or the count_map in the
info.csv file.
* Default: false
[show_source]
distributed = <bool>
* Controls whether we will do a distributed search for show source to get events from all servers and indexes.
* Turning this off results in better performance for show source, but events will only come from the initial server and index.
* NOTE: event signing and verification is not supported in distributed mode.
* Default: true
max_count = <integer>
* Maximum number of events accessible by show_source.
* The show source command will fail when more than this many events are
in the
same second as the requested event.
* Default: 10000
max_timeafter = <timespan>
* Maximum time after requested event to show.
* Default: '1day' (86400 seconds)
max_timebefore = <timespan>
* Maximum time before requested event to show.
* Default: '1day' (86400 seconds)
[rex]
match_limit = <integer>
* Limits the amount of resources that are spent by PCRE
when running patterns that will not match.
* Use this to set an upper bound on how many times PCRE calls an
internal
function, match(). If set too low, PCRE might fail to correctly match
a pattern.
* Default: 100000
depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000
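A hypothetical override raising both PCRE limits for deployments whose patterns legitimately require deep backtracking (the values are illustrative; larger values let PCRE spend more CPU on events that will never match):

[rex]
match_limit = 200000
depth_limit = 2000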
[slc]
maxclusters = <integer>
* Maximum number of clusters to create.
* Default: 10000.
[slow_peer_disconnect]
# This stanza contains settings for the heuristic that will detect and
# disconnect slow peers towards the end of a search that has returned a
# large volume of data.
batch_search_activation_fraction = <double>
* The fraction of peers that must have completed before we start
disconnecting.
* This is only applicable to batch search because the slow peers will
not hold back the fast peers.
* Default: 0.9
bound_on_disconnect_threshold_as_fraction_of_mean = <double>
* The maximum value of the threshold data rate we will use to determine
if a peer is slow. The actual threshold will be computed dynamically
at search time but will never exceed
(100*maximum_threshold_as_fraction_of_mean)% on either side of the
mean.
* Default: 0.2
disabled = <boolean>
* Specifies if this feature is enabled.
* Default: true
grace_period_before_disconnect = <double>
* If the heuristic consistently claims that the peer is slow for at least <grace_period_before_disconnect>*life_time_of_collector seconds, only then will we disconnect the peer.
* Default: 0.1
sensitivity = <double>
* Sensitivity of the heuristic to newer values. For larger values of sensitivity, the heuristic will give more weight to newer statistics.
* Default: 0.3
[summarize]
bucket_refresh_interval = <int>
* When poll_buckets_until_maxtime is enabled in a non-clustered
environment, this is the minimum amount of time (in seconds)
between bucket refreshes.
* Default: 30
bucket_refresh_interval_cluster = <int>
* When poll_buckets_until_maxtime is enabled in a clustered
environment, this is the minimum amount of time (in seconds)
between bucket refreshes.
* Default: 120
hot_bucket_min_new_events = <integer>
* The minimum number of new events that need to be added to the hot bucket (since last summarization) before a new summarization can take place. To disable hot bucket summarization, set this value to a large positive number.
* Default: 100000
max_hot_bucket_summarization_idle_time = <unsigned int>
* Maximum amount of time, in seconds, that a hot bucket can be idle. When the time exceeds the maximum, all of the events are summarized even if there are not enough events (determined by the hot_bucket_min_new_events attribute).
* Default: 900 (15 minutes)
max_summary_ratio = <float>
* A number in the [0-1] range that indicates the maximum ratio of
summary data / bucket size at which point the summarization of that
bucket, for the particular search, will be disabled. Use 0 to disable.
* Default: 0
max_summary_size = <int>
* Size of summary, in bytes, at which point we'll start applying the
max_summary_ratio. Use 0 to disable.
* Default: 0
max_time = <int>
* The maximum amount of time, in seconds, that a summary search process is allowed to run.
* Use 0 to disable.
* Default: 0
poll_buckets_until_maxtime = <bool>
* Only modify this setting when you are directed to do so by Support.
* Use the datamodels.conf setting
acceleration.poll_buckets_until_maxtime
for individual data models that are sensitive to summarization latency
delays.
* Default: false
sleep_seconds = <integer>
* The amount of time, in seconds, to sleep between polling of
summarization
complete status.
* Default: 5
stale_lock_seconds = <integer>
* The amount of time, in seconds, that must elapse since the mod time of a .lock file before summarization considers that lock file stale and removes it.
* Default: 600
[system_checks]
orphan_searches = enabled|disabled
* Enables/disables automatic UI message notifications to admins for
scheduled saved searches with invalid owners.
* Scheduled saved searches with invalid owners are considered
"orphaned".
They cannot be run because Splunk cannot determine the roles to use
for
the search context.
* Typically, this situation occurs when a user creates scheduled searches and then departs the organization or company, causing their account to be deactivated.
* Currently this check and any resulting notifications occur on system
startup and every 24 hours thereafter.
* Default: enabled
[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is
processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
the number of events this indexer processes to the rate (in
kilobytes per second) that you specify.
* NOTE:
* There is no guarantee that the thruput processor
will always process less than the number of kilobytes per
second that you specify with this setting. The status of
earlier processing queues in the pipeline can cause
temporary bursts of network activity that exceed what
is configured in the setting.
* The setting does not limit the amount of data that is
written to the network from the tcpoutput processor, such
as what happens when a universal forwarder sends data to
an indexer.
* The thruput processor applies the 'maxKBps' setting for each
ingestion pipeline. If you configure multiple ingestion
pipelines, the processor multiplies the 'maxKBps' value
by the number of ingestion pipelines that you have
configured.
* For more information about multiple ingestion pipelines, see
the 'parallelIngestionPipelines' setting in the
server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
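For example, a hypothetical override that throttles a forwarder to roughly 1 MB/s per ingestion pipeline:

[thruput]
# 1024 KB/s; with parallelIngestionPipelines = 2, the effective
# aggregate limit would be about 2 MB/s
maxKBps = 1024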
[viewstates]
enable_reaper = <boolean>
* Controls whether the viewstate reaper runs
* Default: true
reaper_freq = <integer>
* Controls how often, in seconds, the viewstate reaper runs.
* Default: 86400 (24 hours)
reaper_soft_warn_level = <integer>
* Controls what the reaper considers an acceptable number of viewstates.
* Default: 1000
ttl = <integer>
* Controls the age, in seconds, at which a viewstate is considered
eligible
for reaping
* Default: 86400 (24 hours)
[scheduled_views]
# Scheduled views are hidden [saved searches / reports] that trigger PDF
generation
# for a dashboard. When a user enables scheduled PDF delivery in the
dashboard UI,
# scheduled views are created.
#
# The naming pattern for scheduled views is
_ScheduledView__<view_name>,
# where <view_name> is the name of the corresponding dashboard.
#
# The scheduled views reaper, if enabled, runs periodically to look for
# scheduled views that have been orphaned. A scheduled view becomes
orphaned
# when its corresponding dashboard has been deleted. The scheduled
views reaper
# deletes these orphaned scheduled views. The reaper only deletes
scheduled
# views if the scheduled views have not been disabled and their
permissions
# have not been modified.
enable_reaper = <boolean>
* Controls whether the scheduled views reaper runs, as well as whether scheduled views are deleted when the dashboard they reference is deleted.
* Default: true
reaper_freq = <integer>
* Controls how often, in seconds, the scheduled views reaper runs.
* Default: 86400 (24 hours)
OPTIMIZATION
[search_optimization]
enabled = <bool>
* Enables search optimizations
* Default: true
[search_optimization::search_expansion]
enabled = <bool>
* Enables optimizer-based search expansion.
* This enables the optimizer to work on pre-expanded searches.
* Default: true
[search_optimization::replace_append_with_union]
enabled = <bool>
* Enables replace append with union command optimization
* Default: true
[search_optimization::merge_union]
enabled = <bool>
* Merge consecutive unions
* Default: true
[search_optimization::predicate_merge]
enabled = <bool>
* Enables predicate merge optimization
* Default: true
inputlookup_merge = <bool>
* Enables predicate merge optimization to merge predicates into
inputlookup
* predicate_merge must be enabled for this optimization to be performed
* Default: true
merge_to_base_search = <bool>
* Enable the predicate merge optimization to merge the predicates into
the first search in the pipeline.
* Default: true
fields_black_list = <fields_list>
* A comma-separated list of fields that will not be merged into the
first search in the pipeline.
* If a field contains sub-tokens as values, then the field should be
added to fields_black_list
* Default: no default
[search_optimization::predicate_push]
enabled = <bool>
* Enables predicate push optimization
* Default: true
[search_optimization::predicate_split]
enabled = <bool>
* Enables predicate split optimization
* Default: true
[search_optimization::projection_elimination]
enabled = <bool>
* Enables projection elimination optimization
* Default: true
[search_optimization::required_field_values]
enabled = <bool>
* Enables required field value optimization
* Default: true
fields = <comma-separated-string>
* Provide a comma-separated-list of field names to optimize.
* Currently the only valid field names are eventtype and tag.
* Optimization of event type and tag field values applies to transforming searches. This optimization ensures that only the event types and tags necessary to process a search are loaded by the search processor.
* Only change this setting if you need to troubleshoot an issue.
* Default: eventtype, tag
[search_optimization::search_flip_normalization]
enabled = <bool>
* Enables predicate flip normalization.
* This type of normalization takes 'where' command statements
in which the value is placed before the field name and reverses
them so that the field name comes first.
* Predicate flip normalization only works for numeric values and
string values where the value is surrounded by quotes.
* Predicate flip normalization also prepares searches to take
advantage of predicate merge optimization.
* Disable search_flip_normalization if you determine that it is
causing slow search performance.
* Default: true
[search_optimization::reverse_calculated_fields]
enabled = <bool>
* Enables reversing of calculated fields optimization.
* Default: true
[search_optimization::search_sort_normalization]
enabled = <bool>
* Enables predicate sort normalization.
* This type of normalization applies lexicographical sorting logic
to 'search' command expressions and 'where' command statements,
so they are consistently ordered in the same way.
* Disable search_sort_normalization if you determine that it is
causing slow search performance.
* Default: true
[search_optimization::eval_merge]
enabled = <bool>
* Enables a search language optimization that combines two consecutive
"eval" statements into one and can potentially improve search
performance.
* There should be no side effects to enabling this setting, and it need not be changed unless you are troubleshooting an issue with search results.
* Default: true
[search_optimization::replace_table_with_fields]
enabled = <bool>
* Enables a search language optimization that replaces the table command
with the fields command
in reporting or stream reporting searches
* There should be no side effects to enabling this setting, and it need not be changed unless you are troubleshooting an issue with search results.
* Default: true
[directives]
required_tags = enabled|disabled
* Enables the use of the required tags directive, which allows the
search
processor to load only the required tags from the conf system.
* Disable this setting only to troubleshoot issues with search results.
* Default: enabled
required_eventtypes = enabled|disabled
* Enables the use of the required eventtypes directive, which allows the
search
processor to load only the required event types from the conf system.
* Disable this setting only to troubleshoot issues with search results.
* Default: enabled
read_summary = enabled|disabled
* Enables the use of the read summary directive, which allows the search
processor to leverage existing data model acceleration summary data
when it
performs event searches.
* Disable this setting only to troubleshoot issues with search results.
* Default: enabled
[parallelreduce]
maxReducersPerPhase = <positive integer>
* The maximum number of valid indexers that can be used as intermediate reducers in the reducing phase of a parallel reduce search.
* Default: 4
reducers = <string>
* Use this setting to configure one or more valid indexers as dedicated
intermediate reducers for parallel reduce search operations. Only
healthy
search peers are valid indexers.
* For <string>, specify the indexer host and port using the following
format -
host:port. Separate each host:port pair with a comma to specify a list
of
intermediate reducers.
* If the 'reducers' list includes one or more valid indexers, all of
those
indexers (and only these indexers) are used as intermediate reducers
when you
run a parallel reduce search. If the number of valid indexers in the
'reducers' list exceeds 'maxReducersPerPhase', the Splunk software
randomly
selects the set of indexers that are used as intermediate reducers.
* If all of the indexers in the 'reducers' list are invalid, the search
runs
without parallel reduction. All reduce operations for the search are
processed on the search head.
* If 'reducers' is empty or not configured, all valid indexers are
potential
intermediate reducer candidates. The Splunk software randomly selects
valid
indexers as intermediate reducers with limits determined by the
'winningRate'
and 'maxReducersPerPhase' settings.
* Default: ""
winningRate = <positive integer>
* The percentage of valid indexers that the search head attempts to use as intermediate reducers when the 'reducers' setting is not configured.
* If 100 is specified, the search head attempts to use all of the valid indexers.
* If 1 is specified, the search head attempts to use 1% of the indexers.
* The minimum number of indexers used as intermediate reducers is 1.
* The maximum number of indexers used as intermediate reducers is the value of 'maxReducersPerPhase'.
* Default: 50
limits.conf.example
# Version 7.2.1
# CAUTION: Do not alter the settings in limits.conf unless you know what
you are doing.
# Improperly configured limits may result in splunkd crashes and/or
memory overuse.
[searchresults]
maxresultrows = 50000
# maximum number of times to try in the atomic write operation (1 = no
retries)
tocsv_maxretry = 5
# retry period is 1/2 second (500 milliseconds)
tocsv_retryperiod_ms = 500
[subsearch]
# maximum number of results to return from a subsearch
maxout = 100
# maximum number of seconds to run a subsearch before finalizing
maxtime = 10
# time to cache a given subsearch's results
ttl = 300
[anomalousvalue]
maxresultrows = 50000
# maximum number of distinct values for a field
maxvalues = 100000
# maximum size in bytes of any single value (truncated to this size if
larger)
maxvaluesize = 1000
[associate]
maxfields = 10000
maxvalues = 10000
maxvaluesize = 1000
# for the contingency, ctable, and counttable commands
[ctable]
maxvalues = 1000
[correlate]
maxfields = 1000
# for bin/bucket/discretize
[discretize]
maxbins = 50000
# if maxbins not specified or = 0, defaults to
searchresults::maxresultrows
[inputcsv]
# maximum number of retries for creating a tmp directory (with random
name in
# SPLUNK_HOME/var/run/splunk)
mkdir_max_retries = 100
[kmeans]
maxdatapoints = 100000000
[kv]
# when non-zero, the point at which kv should stop creating new columns
maxcols = 512
[rare]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000
[restapi]
# maximum result rows to be returned by /events or /results getters from
REST
# API
maxresultrows = 50000
[search]
# how long searches should be stored on disk once completed
ttl = 86400
# the last accessible event in a call that takes a base and bounds
max_count = 10000
# By default, we will not retry searches in the event of indexer
# failures with indexer clustering enabled.
# Hence, the default value for search_retry here is false.
search_retry = false
[scheduler]
[slc]
# maximum number of clusters to create
maxclusters = 10000
[findkeywords]
# maximum number of events to use in the findkeywords command (and patterns UI)
maxevents = 50000
[stats]
maxresultrows = 50000
maxvalues = 10000
maxvaluesize = 1000
[top]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000
[search_optimization]
enabled = true
[search_optimization::predicate_split]
enabled = true
[search_optimization::predicate_push]
enabled = true
[search_optimization::predicate_merge]
enabled = true
inputlookup_merge = true
merge_to_base_search = true
[search_optimization::projection_elimination]
enabled = true
cmds_black_list = eval, rename
[search_optimization::search_flip_normalization]
enabled = true
[search_optimization::reverse_calculated_fields]
enabled = true
[search_optimization::search_sort_normalization]
enabled = true
[search_optimization::replace_table_with_fields]
enabled = true
literals.conf
The following are the spec and example files for literals.conf.
literals.conf.spec
# Version 7.2.1
#
# This file contains attribute/value pairs for configuring externalized
strings
# in literals.conf.
#
# There is a literals.conf in $SPLUNK_HOME/etc/system/default/. To set
custom
# configurations, place a literals.conf in
$SPLUNK_HOME/etc/system/local/. For
# examples, see literals.conf.example. You must restart Splunk to
enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all literals that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/literals.conf.
########################################################################################
#
# CAUTION:
#
# - You can destroy Splunk's performance by editing literals.conf
incorrectly.
#
# - Only edit the attribute values (on the right-hand side of the
'=').
# DO NOT edit the attribute names (left-hand side of the '=').
#
# - When strings contain "%s", do not add or remove any occurrences
of %s, or
# reorder their positions.
#
# - When strings contain HTML tags, take special care to make sure that
all
# tags and quoted attributes are properly closed, and that all
entities such
# as & are escaped.
#
literals.conf.example
# Version 7.2.1
#
# This file contains an example literals.conf, which is used to
# configure the externalized strings in Splunk.
#
# For the full list of all literals that can be overwritten, consult
# the far longer list in $SPLUNK_HOME/etc/system/default/literals.conf
#
[ui]
PRO_SERVER_LOGIN_HEADER = Login to Splunk (guest/guest)
INSUFFICIENT_DISK_SPACE_ERROR = The server's free disk space is too
low. Indexing will temporarily pause until more disk space becomes
available.
SERVER_RESTART_MESSAGE = This Splunk Server's configuration has been
changed. The server needs to be restarted by an administrator.
UNABLE_TO_CONNECT_MESSAGE = Could not connect to splunkd at %s.
macros.conf
The following are the spec and example files for macros.conf.
macros.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for search language
macros.
[<STANZA_NAME>]
args = <string>,<string>,...
* A comma-delimited string of argument names.
* Argument names can only contain alphanumeric characters, underscores
'_', and
hyphens '-'.
* If the stanza name indicates that this macro takes no arguments, this
attribute will be ignored.
* This list cannot contain any repeated elements.
definition = <string>
* The string that the macro will expand to, with the argument
substitutions
made. (The exception is when iseval = true, see below.)
* Arguments to be substituted must be wrapped by dollar signs ($), for
example:
"the last part of this string will be replaced by the value of
argument foo $foo$".
* Splunk replaces the $<arg>$ pattern globally in the string, even
inside of
quotes.
validation = <string>
* A validation string that is an 'eval' expression. This expression
must
evaluate to a boolean or a string.
* Use this to verify that the macro's argument values are acceptable.
* If the validation expression is boolean, validation succeeds when it
returns
true. If it returns false or is NULL, validation fails, and Splunk returns the error message defined by the attribute, errormsg.
* If the validation expression is not boolean, Splunk expects it to
return a
string or NULL. If it returns NULL, validation is considered a
success.
Otherwise, the string returned is the error string.
errormsg = <string>
* The error message to be displayed if validation is a boolean
expression and
it does not evaluate to true.
iseval = <true/false>
* If true, the definition attribute is expected to be an eval expression
that
returns a string that represents the expansion of this macro.
* Defaults to false.
description = <string>
* OPTIONAL. Simple English description of what the macro does.
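A minimal hypothetical sketch of an eval-based macro (the stanza and argument names are invented for illustration). With iseval = true the definition is an eval expression, and the eval '.' operator concatenates strings, so `last_n_days(7)` would expand to earliest=-7d@d:

[last_n_days(1)]
args = n
definition = "earliest=-" . $n$ . "d@d"
iseval = true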
macros.conf.example
# Version 7.2.1
#
# Example macros.conf
#
# macro foobar that takes 2 arguments; note that the stanza name has no spaces. It can be
# invoked equivalently as `foobar(1,2)` `foobar(foo=1,bar=2)` or
# `foobar(bar=2,foo=1)`
[foobar(2)]
args = foo, bar
definition = "foo = $foo$, bar = $bar$"
# macro showing simple boolean validation, where if foo > bar is not
true,
# errormsg is displayed
[foovalid(2)]
args = foo, bar
definition = "foo = $foo$ and bar = $bar$"
validation = foo > bar
errormsg = foo must be greater than bar
messages.conf
The following are the spec and example files for messages.conf.
messages.conf.spec
# Version 7.2.1
#
# This file contains attribute/value pairs for configuring externalized
strings
# in messages.conf.
#
# There is a messages.conf in $SPLUNK_HOME/etc/system/default/. To set
custom
# configurations, place a messages.conf in
$SPLUNK_HOME/etc/system/local/. You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all messages that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/messages.conf
#
# The full name of a message resource is component_key + ':' +
message_key.
# After a descriptive message key, append two underscores, and then use
the
# letters after the % in printf style formatting, surrounded by
underscores.
#
# For example, assume the following message resource is defined:
#
# [COMPONENT:MSG_KEY__D_LU_S]
# message = FunctionX returned %d, expected %lu.
# action = See %s for details.
#
# The message key expects 3 printf style arguments (%d, %lu, %s), which can be
# in either the message or action fields but must appear in the same order.
#
# In addition to the printf style arguments above, some custom UI
patterns are
# allowed in the message and action fields. These patterns will be
rendered by
# the UI before displaying the text.
#
# For example, linking to a specific Splunk page can be done using this
pattern:
#
# [COMPONENT:MSG_LINK__S]
# message = License key '%s' is invalid.
# action = See [[/manager/system/licensing|Licensing]] for details.
#
# Another custom formatting option is for date/time arguments. If the argument
# should be rendered in local time and formatted to a specific language, simply
# provide the unix timestamp and prefix the printf style argument with "$t".
# This will hint that the argument is actually a timestamp (not a
number) and
# should be formatted into a date/time string.
#
# The language and timezone used to render the timestamp is determined
during
# render time given the current user viewing the message - it is not
required to
# provide these details here.
#
# For example, assume the following message resource is defined:
#
# [COMPONENT:TIME_BASED_MSG__LD]
# message = Component exception @ $t%ld.
# action = See splunkd.log for details.
#
# The first argument is prefixed with "$t", and therefore will be
treated as a
# unix timestamp. It will be formatted as a date/time string.
#
# For these and other examples, check out
# $SPLUNK_HOME/etc/system/README/messages.conf.example
#
############################################################################
# Component
############################################################################
[<component>]
name = <string>
* The human-readable name used to prefix all messages under this component.
* Required.
############################################################################
# Message
############################################################################
[<component>:<key>]
message = <string>
* The message string describing what and why something happened
* Required
message_alternate = <string>
* An alternative static string for this message
* Any arguments will be ignored
* Defaults to nothing
action = <string>
* The action string describing the next steps in reaction to the message.
* Defaults to nothing.
severity = critical|error|warn|info|debug
* The severity of the message
* Defaults to warn
target = [auto|ui|log|ui,log|none]
* Sets the message display target.
* "auto" means the message display target is automatically determined by
  context.
* "ui" messages are displayed in Splunk Web and can be passed on from
  search peers to search heads in a distributed search environment.
* "log" messages are displayed only in the log files for the instance,
  under the BulletinBoard component, with log levels that respect their
  message severity. For example, messages with severity "info" are
  displayed as INFO log entries.
* "ui,log" combines the functions of the "ui" and "log" options.
* "none" completely hides the message (please consider using "log" and
  reducing severity instead; using "none" may impact diagnosability).
* Default: auto
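For instance, a minimal sketch of a local override in
$SPLUNK_HOME/etc/system/local/messages.conf, reusing the resource name from
the example above; it demotes the message to a log-only, lower-severity one:

[COMPONENT:MSG_KEY__D_LU_S]
severity = info
target = log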
messages.conf.example
# Version 7.2.1
#
# This file contains an example messages.conf of attribute/value pairs for
# configuring externalized strings.
#
# There is a messages.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a messages.conf in $SPLUNK_HOME/etc/system/local/.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all literals that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/messages.conf
[DISK_MON]
name = Disk Monitor
[DISK_MON:INSUFFICIENT_DISK_SPACE_ERROR__S_S_LLU]
message = Cannot write data to index path '%s' because you are low on disk space on partition '%s'. Indexing has been paused.
action = Free disk space above %lluMB to resume indexing.
severity = warn
capabilities = indexes_edit
help = learnmore.indexer.setlimits
[LM_LICENSE]
name = License Manager
[LM_LICENSE:EXPIRED_STATUS__LD]
message = Your license has expired as of $t%ld.
action = $CONTACT_SPLUNK_SALES_TEXT$
capabilities = license_edit
[LM_LICENSE:EXPIRING_STATUS__LD]
message = Your license will soon expire on $t%ld.
action = $CONTACT_SPLUNK_SALES_TEXT$
capabilities = license_edit
[LM_LICENSE:INDEXING_LIMIT_EXCEEDED]
message = Daily indexing volume limit exceeded today.
action = See [[/manager/search/licenseusage|License Manager]] for details.
severity = warn
capabilities = license_view_warnings
help = learnmore.license.features
[LM_LICENSE:MASTER_CONNECTION_ERROR__S_LD_LD]
message = Failed to contact license master: reason='%s', first failure time=%ld ($t%ld).
severity = warn
capabilities = license_edit
help = learnmore.license.features
[LM_LICENSE:SLAVE_WARNING__LD_S]
message = License warning issued within past 24 hours: $t%ld.
action = Please refer to the License Usage Report view on license master '%s' to find out more.
severity = warn
capabilities = license_edit
help = learnmore.license.features
multikv.conf
The following are the spec and example files for multikv.conf.
multikv.conf.spec
# Version 7.2.1
#
# This file contains possible attribute and value pairs for creating multikv
# rules. Multikv is the process of extracting events from table-like events,
# such as the output of top, ps, ls, netstat, etc.
#
# There is NO DEFAULT multikv.conf. To set custom configurations, place a
# multikv.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# multikv.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# NOTE: Only configure multikv.conf if Splunk's default multikv behavior does
# not meet your needs.
[<multikv_config_name>]
* Name of the stanza to use with the multikv search command, for example:
  '| multikv conf=<multikv_config_name> rmorig=f | ....'
* Follow this stanza name with any number of the following attribute/value
  pairs.
Section Definition
OR
* A line membership test.
* Member if lines match the regex.
OR
Section processing
Set to false/0 if you want consecutive delimiters to be treated as empty
values. Defaults to true.
multikv.conf.example
# Version 7.2.1
#
# This file contains example multi key/value extraction configurations.
#
# To use one or more of these configurations, copy the configuration block
# into multikv.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Sample output:
# 29960 mdimport 0.0% 0:00.29 3  60 50 1.10M 2.55M 3.54M 38.7M
# 29905 pickup   0.0% 0:00.01 1  16 17 164K  832K  764K  26.7M
#....
[top_mkv]
# pre table starts at "Process..." and ends at line containing "PID"
pre.start = "Process"
pre.end = "PID"
pre.ignore = _all_
# table body ends at the next "Process" line (ie start of another top)
# tokenize and inherit the number of tokens from previous section (header)
body.end = "Process"
body.tokens = _tokenize_, 0, " "
[ls-lah-cpp]
pre.start = "total"
pre.linecount = 1
# ignore dirs
body.ignore = _regex_ "^drwx.*",
body.tokens = _tokenize_, 0, " "
outputs.conf
The following are the spec and example files for outputs.conf.
outputs.conf.spec
# Version 7.2.1
#
# Forwarders require outputs.conf. Splunk instances that do not forward
# do not use it. Outputs.conf determines how the forwarder sends data to
# receiving Splunk instances, either indexers or other forwarders.
#
# To configure forwarding, create an outputs.conf file in
# $SPLUNK_HOME/etc/system/local/. For examples of its use, see
# outputs.conf.example.
#
# You must restart Splunk software to enable configurations.
#
# To learn more about configuration files (including precedence) see the
# topic "About Configuration Files" in the Splunk Documentation set.
#
# To learn more about forwarding, see the topic "About forwarding and
# receiving data" in the Splunk Enterprise Forwarding manual.
GLOBAL SETTINGS
# * Do not use the 'sslPassword', 'socksPassword', or 'token' settings
#   to set passwords in this stanza, as they may remain readable to
#   attackers; specify these settings in the [tcpout] stanza instead.
[tcpout]
indexAndForward = <boolean>
* Set to "true" to index all data locally, in addition to forwarding it.
* This is known as an "index-and-forward" configuration.
* This setting is only available for heavy forwarders.
* This setting is only available at the top level [tcpout] stanza. It
cannot be overridden in a target group.
* Default: false
[tcpout:<target_group>]
blockWarnThreshold = <integer>
* The output pipeline send failure count threshold after which a failure
  message appears as a banner in Splunk Web.
* Optional.
* To disable Splunk Web warnings on blocked output queue conditions, set
  this to a large value (for example, 2000000).
* Default: 100
indexerDiscovery = <name>
* The name of the master node to use for indexer discovery.
* Instructs the forwarder to fetch the list of indexers from the master node
  specified in the corresponding [indexer_discovery:<name>] stanza.
* No default.
token = <string>
* The access token for receiving data.
* Optional.
* If you configured an access token for receiving data from a forwarder,
Splunk software populates that token here.
* If you configured a receiver with an access token and that token is not
  specified here, the receiver rejects all data sent to it.
* No default.
[tcpout-server://<ip address>:<port>]
* Optional. There is no requirement to have [tcpout-server] stanzas.
TCPOUT SETTINGS
# These settings are optional and can appear in any of the three stanza
levels.
[tcpout<any of above>]
#----General Settings----
sendCookedData = <boolean>
* Whether to send processed or unprocessed data to the receiving server.
* If "true", events are cooked (have been processed by Splunk software).
* If "false", events are raw and untouched prior to sending.
* Set to "false" if you are sending events to a third-party system.
* Default: true
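For instance, a minimal sketch of a raw-output group for a third-party
system (the group name, host, and port are placeholders):

[tcpout:thirdparty_siem]
server = siem.example.com:514
sendCookedData = false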
heartbeatFrequency = <integer>
* How often (in seconds) to send a heartbeat packet to the receiving server.
* This setting is a mechanism for the forwarder to know that the receiver
  (indexer) is alive. If the indexer does not send a return packet to the
  forwarder, the forwarder declares the receiver unreachable and does not
  forward data to it.
* The forwarder only sends heartbeats if the 'sendCookedData' setting
  is set to "true".
* Default: 30
blockOnCloning = <boolean>
* Whether or not the TcpOutputProcessor should wait until at least one
of the cloned output groups receives events before attempting to send
more events.
* If "true", the TcpOutputProcessor blocks until at least one of the
cloned groups receives events. It does not drop events when all the
cloned groups are down.
* If "false", the TcpOutputProcessor drops events when all the cloned groups
  are down and all queues for the cloned groups are full. When at least one
  of the cloned groups is up and queues are not full, the events are not
  dropped.
* Default: true
blockWarnThreshold = <integer>
* The output pipeline send failure count threshold, after which a failure
  message appears as a banner in Splunk Web.
* Optional.
* To disable Splunk Web warnings on blocked output queue conditions, set
  this to a large value (for example, 2000000).
* Default: 100
compressed = <boolean>
* If "true", the receiver communicates with the forwarder in compressed
format.
* If "true", you do not need to set the 'compressed' setting to "true"
in the inputs.conf file on the receiver.
* This setting applies to non-SSL forwarding only. For SSL forwarding,
Splunk software uses the 'useClientSSLCompression' setting.
* Default: false
negotiateProtocolLevel = <unsigned integer>
* Default (if 'negotiateNewProtocol' is "true"): 1
* Default (if 'negotiateNewProtocol' is not "true"): 0
negotiateNewProtocol = <boolean>
* Sets the default value of the 'negotiateProtocolLevel' setting.
* DEPRECATED. Set 'negotiateProtocolLevel' instead.
* Default: true
channelReapInterval = <integer>
* How often, in milliseconds, channel codes are reaped, or made available
  for re-use.
* This value sets the minimum time between reapings. In practice,
  consecutive reapings might be separated by greater than the number of
  milliseconds specified here.
* Default: 60000 (1 minute)
channelTTL = <integer>
* How long, in milliseconds, a channel can remain "inactive" before
it is reaped, or before its code is made available for reuse by a
different channel.
* Default: 300000 (5 minutes)
channelReapLowater = <integer>
* If the number of active channels is greater than 'channelReapLowater',
  Splunk software reaps old channels to make their channel codes available
  for reuse.
* If the number of active channels is less than 'channelReapLowater',
  Splunk software does not reap channels, no matter how old they are.
* This value essentially determines how many active-but-old channels
  Splunk software keeps "pinned" in memory on both sides of a
  Splunk-to-Splunk connection.
* A non-zero value helps ensure that Splunk software does not waste
  network resources by "thrashing" channels in the case of a forwarder
  sending a trickle of data.
* Default: 10
socksServer = [<ip>|<servername>]:<port>
* The IP address or servername of the Socket Secure version 5 (SOCKS5)
server.
* Required.
* This setting specifies the port on which the SOCKS5 server is listening.
* After you configure and restart the forwarder, it connects to the SOCKS5
  proxy host, and optionally authenticates to the server on demand if you
  provide credentials.
* NOTE: Only SOCKS5 servers are supported.
* No default.
socksUsername = <username>
* The SOCKS5 username to use when authenticating against the SOCKS5 server.
* Optional.
socksPassword = <password>
* The SOCKS5 password to use when authenticating against the SOCKS5 server.
* Optional.
socksResolveDNS = <boolean>
* Whether or not the forwarder should rely on the SOCKS5 proxy server Domain
  Name Server (DNS) to resolve hostnames of indexers in the output group it
  is forwarding data to.
* If "true", the forwarder sends the hostnames of the indexers to the
SOCKS5 server, and lets the SOCKS5 server do the name resolution. It
does not attempt to resolve the hostnames on its own.
* If "false", the forwarder attempts to resolve the hostnames of the
indexers through DNS on its own.
* Optional.
* Default: false
#----Queue Settings----
maxQueueSize = [<integer>|<integer>[KB|MB|GB]|auto]
* The maximum size of the forwarder output queue.
* The size can be limited based on the number of entries, or on the total
  memory used by the items in the queue.
* If specified as a lone integer (for example, "maxQueueSize=100"),
  the 'maxQueueSize' setting indicates the maximum count of queued items.
* If specified as an integer followed by KB, MB, or GB
  (for example, maxQueueSize=100MB), the 'maxQueueSize' setting indicates
  the maximum random access memory (RAM) size of all the items in the queue.
* If set to "auto", this setting configures a value for the output queue
  depending on the value of the 'useACK' setting:
  * If 'useACK' is set to "false", the output queue uses 500KB.
  * If 'useACK' is set to "true", the output queue uses 7MB.
* If you enable indexer acknowledgment by configuring the 'useACK'
  setting to "true", the forwarder creates a wait queue where it
  temporarily stores data blocks while it waits for indexers to acknowledge
  the receipt of data it previously sent.
  * The forwarder sets the wait queue size to triple the value of what
    you set for 'maxQueueSize'.
  * For example, if you set "maxQueueSize=1024KB" and "useACK=true",
    then the output queue is 1024KB and the wait queue is 3072KB.
  * Although the wait queue and the output queue sizes are both controlled
    by this setting, they are separate.
  * The wait queue only exists if 'useACK' is set to "true".
* Limiting the queue sizes by quantity is historical. However,
  if you configure queues based on quantity, keep the following in mind:
  * Queued items can be events or blocks of data.
  * Non-parsing forwarders, such as universal forwarders, send
    blocks, which can be up to 64KB.
  * Parsing forwarders, such as heavy forwarders, send events, which
    are the size of the events. Some events are as small as
    a few hundred bytes. In unusual cases (data dependent), you might
    arrange to produce events that are multiple megabytes.
* Default: auto
  * If 'useACK' is set to "true" and this setting is set to "auto", then
    the output queue is 7MB and the wait queue is 21MB (see the sketch
    below for a worked sizing example).
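For instance, under the sizing rules above, the following sketch (with a
placeholder server) yields a 2MB output queue and, because 'useACK' is
enabled, a 6MB wait queue (3 x 2MB):

[tcpout:indexers]
server = 10.1.1.197:9997
useACK = true
maxQueueSize = 2MB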
dropEventsOnQueueFull = <integer>
* The number of seconds to wait before the output queue throws out all
  new events until it has space.
* If set to a positive number, the queue waits 'dropEventsOnQueueFull'
  seconds before throwing out all new events.
* If set to -1 or 0, the output queue blocks when it is full. This further
  blocks events up the processing chain.
* If any target group queue is blocked, no more data reaches any other
  target group.
* Using auto load-balancing is the best way to minimize this condition.
  In this case, multiple receivers must be down (or jammed up) before
  queue blocking can occur.
* CAUTION: DO NOT SET THIS VALUE TO A POSITIVE INTEGER IF YOU ARE
  MONITORING FILES.
* Default: -1
dropClonedEventsOnQueueFull = <integer>
* The amount of time, in seconds, to wait before dropping events from
  the group.
* If set to a positive number, the queue does not block completely, but
  waits up to 'dropClonedEventsOnQueueFull' seconds to queue events to a
  group.
* If it cannot queue to a group for more than 'dropClonedEventsOnQueueFull'
  seconds, it begins dropping events from the group. It makes sure that at
  least one group in the cloning configuration can receive events.
* The queue blocks if it cannot deliver events to any of the cloned groups.
* If set to -1, the TcpOutputProcessor ensures that each group
  receives all of the events. If one of the groups is down, the
  TcpOutputProcessor blocks everything.
* Default: 5
#######
# Backoff Settings When Unable To Send Events to Indexer
# The settings in this section determine forwarding behavior when there are
# repeated failures in sending events to an indexer ("sending failures").
#######
maxFailuresPerInterval = <integer>
* The maximum number of failures allowed per interval before a forwarder
  applies backoff (stops sending events to the indexer for a specified
  number of seconds). The interval is defined in the
  'secsInFailureInterval' setting below.
* Default: 2
secsInFailureInterval = <integer>
* The number of seconds contained in a failure interval.
* If the number of write failures to the indexer exceeds
  'maxFailuresPerInterval' in the specified 'secsInFailureInterval' seconds,
  the forwarder applies backoff.
* The backoff time period range is 1-10 * 'autoLBFrequency'.
* Default: 1
maxConnectionsPerIndexer = <integer>
* The maximum number of allowed connections per indexer.
* In the presence of failures, the maximum number of connection attempts
per indexer at any point in time.
* Default: 2
connectionTimeout = <integer>
* The time to wait, in seconds, for a forwarder to establish a connection
  with an indexer.
* The connection times out if an attempt to establish a connection with an
  indexer does not complete in 'connectionTimeout' seconds.
* Default: 20
readTimeout = <integer>
* The time to wait, in seconds, for a forwarder to read from a socket it
  has created with an indexer.
* The connection times out if a read from a socket does not complete in
  'readTimeout' seconds.
* This timeout is used to read acknowledgment when indexer acknowledgment
  is enabled (when you set 'useACK' to "true").
* Default: 300 seconds (5 minutes)
writeTimeout = <integer>
* The time to wait, in seconds, for a forwarder to complete a write to a
socket it has created with an indexer.
* The connection times out if a write to a socket does not finish in
'writeTimeout' seconds.
* Default: 300 seconds (5 minutes)
tcpSendBufSz = <integer>
* The size of the TCP send buffer, in bytes.
* Only use this setting if you are a TCP/IP expert.
* Useful to improve throughput with small events, like Windows events.
* Default: the system default
ackTimeoutOnShutdown = <integer>
* The time to wait, in seconds, for the forwarder to receive indexer
acknowledgments during a forwarder shutdown.
* The connection times out if the forwarder does not receive indexer
acknowledgements (ACKs) in 'ackTimeoutOnShutdown' seconds during
forwarder shutdown.
* Default: 30 seconds
dnsResolutionInterval = <integer>
* The base time interval, in seconds, at which indexer Domain Name Server
  (DNS) names are resolved to IP addresses.
* This is used to compute the runtime dnsResolutionInterval as follows:
  Runtime interval = 'dnsResolutionInterval' + (number of indexers in
  server settings - 1) * 30.
* The DNS resolution interval is extended by 30 seconds for each additional
  indexer in the server setting.
* Default: 300 seconds (5 minutes)
forceTimebasedAutoLB = <boolean>
* Forces existing data streams to switch to a newly elected indexer every
  auto load balancing cycle.
* On universal forwarders, use the 'EVENT_BREAKER_ENABLE' and
  'EVENT_BREAKER' settings in props.conf rather than 'forceTimebasedAutoLB'
  for improved load balancing, line breaking, and distribution of events.
* Default: false
# This filter does not work if it is created under any other stanza.
forwardedindex.<n>.whitelist = <regex>
forwardedindex.<n>.blacklist = <regex>
* These filters determine which events get forwarded to the index,
  based on the indexes the events are targeted to.
* An ordered list of whitelists and blacklists, which together
  decide if events are forwarded to an index.
* The order is determined by <n>. <n> must start at 0 and continue with
  positive integers, in sequence. There cannot be any gaps in the sequence.
  * For example (see also the sketch after this description):
    forwardedindex.0.whitelist, forwardedindex.1.blacklist,
    forwardedindex.2.whitelist, ...
* The filters can start from either whitelist or blacklist. They are tested
  from forwardedindex.0 to forwardedindex.<max>.
* If both forwardedindex.<n>.whitelist and forwardedindex.<n>.blacklist are
  present for the same value of n, then forwardedindex.<n>.whitelist is
  honored. forwardedindex.<n>.blacklist is ignored in this case.
* In general, you do not need to change these filters from their default
  settings in $SPLUNK_HOME/system/default/outputs.conf.
* Filtered out events are not indexed if you do not enable local indexing.
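For instance, a sketch of an ordered filter chain under the global [tcpout]
stanza that forwards events for all indexes, blocks internal indexes, and
then re-allows _audit (the regexes are illustrative):

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = _audit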
forwardedindex.filter.disable = <boolean>
* Whether or not index filtering is active.
* If "true", disables index filtering. Events for all indexes are then
forwarded.
* Default: false
#----Automatic Load-Balancing
# Automatic load balancing is the only way to forward data.
# Round-robin method of load balancing is no longer supported.
autoLBFrequency = <integer>
* The amount of time, in seconds, that a forwarder sends data to an indexer
  before redirecting outputs to another indexer in the pool.
* Use this setting when you are using automatic load balancing of outputs
  from universal forwarders (UFs).
* Every 'autoLBFrequency' seconds, a new indexer is selected randomly from
  the list of indexers provided in the server setting of the target group
  stanza.
* Default: 30
autoLBVolume = <integer>
* The volume of data, in bytes, to send to an indexer before a new indexer
  is randomly selected from the list of indexers provided in the server
  setting of the target group stanza.
* This setting is closely related to the 'autoLBFrequency' setting.
  The forwarder first uses 'autoLBVolume' to determine if it needs to
  switch to another indexer. If the 'autoLBVolume' is not reached,
  but the 'autoLBFrequency' is, the forwarder switches to another
  indexer as the forwarding target.
* A non-zero value means that volume-based forwarding is active.
* 0 means the volume-based forwarding is not active.
* Default: 0
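A minimal sketch combining the two settings (placeholder servers): the
forwarder switches targets once it has sent 1GB to the current indexer, or
after 60 seconds if the volume threshold has not been reached:

[tcpout:lb_group]
server = 10.1.1.197:9997, 10.1.1.200:9997
autoLBFrequency = 60
autoLBVolume = 1073741824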
useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or
  relies on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to
  "false" on the receiver.
* If "true", then the forwarder uses SSL to connect to the receiver.
* If "false", then the forwarder does not use SSL to connect to the
  receiver.
* If "legacy", then the forwarder uses the 'clientCert' property to
  determine whether or not to use SSL to connect.
* Default: legacy
sslPassword = <password>
* The password associated with the CAcert.
* The default Splunk CAcert uses the password "password".
* No default.
clientCert = <path>
* The full path to the client SSL certificate in Privacy Enhanced Mail
  (PEM) format.
* If you have not set 'useSSL', then this connection uses SSL if and only
  if you specify this setting with a valid client SSL certificate file.
* No default.
sslCertPath = <path>
* The full path to the client SSL certificate.
* DEPRECATED.
* Use the 'clientCert' setting instead.
cipherSuite = <string>
* The specified cipher string for the input processors.
* This setting ensures that the server does not accept connections using
  weak encryption protocols.
* The default can vary. See the 'cipherSuite' setting in
  $SPLUNK_HOME/etc/system/default/outputs.conf for the current default.
sslCipher = <string>
* The specified cipher string for the input processors.
* DEPRECATED.
* Use the 'cipherSuite' setting instead.
sslRootCAPath = <path>
* The full path to the root Certificate Authority (CA) certificate store.
* DEPRECATED.
* Use the 'server.conf/[sslConfig]/sslRootCAPath' setting instead.
* Used only if 'sslRootCAPath' in server.conf is not set.
* The <path> must refer to a Privacy Enhanced Mail (PEM) format file
  containing one or more root CA certificates concatenated together.
* No default.
sslVerifyServerCert = <boolean>
* Serves as an additional step for authenticating your indexers.
* If "true", ensure that the server you are connecting to has a valid
SSL certificate. Note that certificates with the same Common Name as
the CA's certificate will fail this check.
* Both the common name and the alternate name of the server are then
  checked for a match.
* Default: false
tlsHostname = <string>
* A Transport Layer Security (TLS) extension that allows sending an
  identifier with the SSL Client Hello.
* Default: empty string
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* Checks the Common Name of the server's certificate against the names
  listed here.
* Optional.
* The Common Name identifies the host name associated with the certificate.
  For example, www.example.com or example.com
* If there is no match, assume that Splunk software is not authenticated
  against this server.
* You must set the 'sslVerifyServerCert' setting to "true" for this setting
  to work.
* Default: empty string (no common name checking).
useClientSSLCompression = <boolean>
* Enables compression on SSL.
* Default: the value of 'server.conf/[sslConfig]/useClientSSLCompression'
sslQuietShutdown = <boolean>
* Enables quiet shutdown mode in SSL.
* Default: false
# Indexer acknowledgment ensures that forwarded data is reliably delivered
# to the receiver.
#
# If the receiver is an indexer, it indicates that the indexer has received
# the data, indexed it, and written it to the file system. If the receiver
# is an intermediate forwarder, it indicates that the intermediate forwarder
# has successfully forwarded the data to the terminating indexer and has
# received acknowledgment from that indexer.
#
# Indexer acknowledgment is a complex feature that requires careful
# planning. Before using it, read the online topic describing it in the
# Splunk Enterprise Distributed Deployment manual.
useACK = <boolean>
* Whether or not to use indexer acknowledgment.
* Indexer acknowledgment is an optional capability on forwarders that helps
  prevent loss of data when sending data to an indexer.
* When set to "true", the forwarder retains a copy of each sent event
  until the receiving system sends an acknowledgment.
  * The receiver sends an acknowledgment when it has fully handled the
    event (typically when it has written it to disk in indexing).
  * If the forwarder does not receive an acknowledgment, it resends the
    data to an alternative receiver.
  * NOTE: The maximum memory used for the outbound data queues increases
    significantly by default (500KB -> 28MB) when the 'useACK' setting is
    enabled. This is intended for correctness and performance.
* When set to "false", the forwarder considers the data fully processed
  when it finishes writing it to the network socket.
* You can configure this setting at the [tcpout] or [tcpout:<target_group>]
  stanza levels. You cannot set it for individual servers at the
  [tcpout-server: ...] stanza level.
* Default: false
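A minimal sketch enabling acknowledgment at the target-group level
(placeholder servers):

[tcpout:reliable_group]
server = 10.1.1.197:9997, 10.1.1.200:9997
useACK = true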
Syslog output
[syslog]
type = [tcp|udp]
priority = <<integer>> | NO_PRI
maxEventSize = <integer>
[syslog:<target_group>]
#----REQUIRED SETTINGS----
# The following settings are required for a syslog output group.
server = [<ip>|<servername>]:<port>
* The IP address or servername where the syslog server is running.
* Required.
* This setting specifies the port on which the syslog server listens.
* Default: 514
#----OPTIONAL SETTINGS----
type = [tcp|udp]
* The network protocol to use.
* Default: udp
priority = <<integer>>|NO_PRI
* The priority value included at the beginning of each syslog message.
* The priority value ranges from 0 to 191 and is made up of a Facility
  value and a Level value.
* Enclose the priority value in "<>" delimiters. For example, specify a
  priority of 34 as follows: <34>
* The integer must be one to three digits in length.
* The value you enter appears in the syslog header.
* The value mimics the number passed by a syslog interface call. See the
  *nix man page for syslog for more information.
* Calculate the priority value as follows: Facility * 8 + Severity
  For example, if Facility is 4 (security/authorization messages)
  and Severity is 2 (critical conditions), the priority will be
  (4 * 8) + 2 = 34. Set the setting to <34>.
* If you do not want to add a priority value, set the priority to
  "<NO_PRI>".
* The table of facility and severity (and their values) is located in
  RFC3164. For example, http://www.ietf.org/rfc/rfc3164.txt section 4.1.1
* The table is reproduced briefly below. Some values are outdated.
Facility:
0 kernel messages
1 user-level messages
2 mail system
3 system daemons
4 security/authorization messages
5 messages generated internally by syslogd
6 line printer subsystem
7 network news subsystem
8 UUCP subsystem
9 clock daemon
10 security/authorization messages
11 FTP daemon
12 NTP subsystem
13 log audit
14 log alert
15 clock daemon
16 local use 0 (local0)
17 local use 1 (local1)
18 local use 2 (local2)
19 local use 3 (local3)
20 local use 4 (local4)
21 local use 5 (local5)
22 local use 6 (local6)
23 local use 7 (local7)
Severity:
0 Emergency: system is unusable
1 Alert: action must be taken immediately
2 Critical: critical conditions
3 Error: error conditions
4 Warning: warning conditions
5 Notice: normal but significant condition
6 Informational: informational messages
7 Debug: debug-level messages
* Default: <13> (Facility of "user" and Severity of "Notice")
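For instance, local use 0 (Facility 16) at Warning (Severity 4) gives
(16 * 8) + 4 = 132; a minimal sketch with a placeholder server:

[syslog:local0-warnings]
server = syslog.example.com:514
type = tcp
priority = <132>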
syslogSourceType = <string>
* Specifies an additional rule for handling data, in addition to that
  provided by the 'syslog' source type.
* This string is used as a substring match against the sourcetype key. For
  example, if the string is set to "syslog", then all sourcetypes
  containing the string 'syslog' receive this special treatment.
* To match a sourcetype explicitly, use the pattern
  "sourcetype::sourcetype_name".
  * Example: syslogSourceType = sourcetype::apache_common
* Data that is "syslog" or matches this setting is assumed to already be in
  syslog format.
* Data that does not match the rules has a header, optionally a timestamp
  (if defined in 'timestampformat'), and a hostname added to the front of
  the event. This is how Splunk software causes arbitrary log data to
  match syslog expectations.
* No default.
timestampformat = <format>
* If specified, Splunk software prepends formatted timestamps to events
  forwarded to syslog.
* As above, this logic is only applied when the data is not syslog, or the
  type specified in the 'syslogSourceType' setting, because it is assumed
  to already be in syslog format.
* If the data is not in syslog-compliant format and you do not specify a
  'timestampformat', the output will not be RFC3164-compliant.
* The format is a strftime (string format time)-style timestamp formatting
  string. This is the same implementation used in the 'eval' search
  command, Splunk logging, and other places in splunkd.
  * For example: %b %e %H:%M:%S for RFC3164-compliant output
    * %b - Abbreviated month name (Jan, Feb, ...)
    * %e - Day of month
    * %H - Hour
    * %M - Minute
    * %S - Second
* For a more exhaustive list of the formatting specifiers, refer to the
  online documentation.
* Do not put the string in quotes.
* No default. No timestamp is added to the front of events.
maxEventSize = <integer>
* The maximum size of an event, in bytes, that Splunk software will
  transmit.
* All events exceeding this size are truncated.
* Optional.
* Default: 1024
* <spec> can be:
* <sourcetype>, the source type of an event
* host::<host>, where <host> is the host for an event
* source::<source>, where <source> is the source for an event
IndexAndForward Processor
# If you set 'index' in the [indexAndForward] stanza described below, it
# supersedes any value set in [tcpout].
[indexAndForward]
index = <boolean>
* Turns indexing on or off on a Splunk instance.
* If "true", the Splunk instance indexes data.
* If "false", the Splunk instance does not index data.
* The default can vary. It depends on whether the Splunk
instance is configured as a forwarder, and whether it is
modified by any value configured for the indexAndForward
setting in [tcpout].
selectiveIndexing = <boolean>
* If "true", you can choose to index only specific events that have
the '_INDEX_AND_FORWARD_ROUTING' setting configured.
* Configure the '_INDEX_AND_FORWARD_ROUTING' setting in inputs.conf as:
[<input_stanza>]
_INDEX_AND_FORWARD_ROUTING = local
* Default: false
[indexer_discovery:<name>]
pass4SymmKey = <string>
* The security key used to communicate between the cluster master
and the forwarders.
* This value must be the same for all forwarders and the master node.
* You must explicitly set this value for each forwarder.
* If you specify a password here, you must also specify the same password
  on the master node identified by the 'master_uri' setting.
send_timeout = <seconds>
* Low-level timeout for sending messages to the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
rcv_timeout = <seconds>
* Low-level timeout for receiving messages from the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
cxn_timeout = <seconds>
* Low-level timeout for connecting to the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
master_uri = <uri>
* The URI and management port of the cluster master used in indexer
discovery.
* For example, https://SplunkMaster01.example.com:8089
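A minimal sketch wiring a forwarder to a discovery-enabled master (the
stanza name, URI, and key are placeholders):

[indexer_discovery:cluster1]
pass4SymmKey = changeme
master_uri = https://SplunkMaster01.example.com:8089

[tcpout:cluster1_peers]
indexerDiscovery = cluster1
useACK = true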
Remote Queue Output
[remote_queue:<name>]
remote_queue.* = <string>
* A way to pass configuration information to a remote storage system.
* Optional.
* With remote queues, communication between the forwarder and the remote
  queue system might require additional configuration, specific to the type
  of remote queue. You can pass configuration information to the storage
  system by specifying these settings through the following schema:
  remote_queue.<scheme>.<config-variable> = <value>.
  For example:
  remote_queue.sqs.access_key = ACCESS_KEY
remote_queue.type = sqs|kinesis
* Currently not supported. This setting is related to a feature that is
  still under development.
* Required.
* Specifies the remote queue type, either SQS or Kinesis.
compressed = <boolean>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelReapInterval = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelTTL = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelReapLowater = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
Simple Queue Service (SQS) specific settings
remote_queue.sqs.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The access key to use when authenticating with the remote queue
system that supports the SQS API.
* If not specified, the forwarder looks for the environment variables
  AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
  variables are not set and the forwarder is running on EC2, the forwarder
  attempts to use the secret key from the IAM (Identity and Access
  Management) role.
* Default: not set
remote_queue.sqs.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the secret key to use when authenticating with the remote queue
  system supporting the SQS API.
* If not specified, the forwarder looks for the environment variables
  AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the
  environment variables are not set and the forwarder is running on EC2,
  the forwarder attempts to use the secret key from the IAM (Identity and
  Access Management) role.
* Default: not set
remote_queue.sqs.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The authentication region to use when signing the requests while
  interacting with the remote queue system supporting the Simple Queue
  Service (SQS) API.
* If not specified and the forwarder is running on EC2, the auth_region is
  constructed automatically based on the EC2 region of the instance where
  the forwarder is running.
* Default: not set
remote_queue.sqs.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* The URL of the remote queue system supporting the Simple Queue Service
  (SQS) API.
* Use the scheme, either http or https, to enable or disable SSL
  connectivity with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
  auth_region as follows: https://sqs.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
  either a value specified via the 'remote_queue.sqs.auth_region' setting
  or a value constructed automatically based on the EC2 region of the
  running instance.
* Example: https://sqs.us-west-2.amazonaws.com/
remote_queue.sqs.message_group_id = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* Specifies the Message Group ID for Amazon Web Services Simple Queue
  Service (SQS) First-In, First-Out (FIFO) queues.
* Setting a Message Group ID controls how messages within an AWS SQS queue
  are processed.
* For information on SQS FIFO queues and how messages in those queues are
  processed, see "Recommendations for FIFO queues" in the AWS SQS Developer
  Guide.
* If you configure this setting, Splunk software assumes that the SQS queue
  is a FIFO queue, and that messages in the queue should be processed
  first-in, first-out.
* Otherwise, Splunk software assumes that the SQS queue is a standard
  queue.
* Can be between 1-128 alphanumeric or punctuation characters.
* NOTE: FIFO queues must have Content-Based De-duplication enabled.
* Default: not set
remote_queue.sqs.retry_policy = max_count|none
* Sets the retry policy to use for remote queue operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that
  fail for those failures that might be intermittent.
* Retry policies:
  + "max_count": Imposes a maximum number of times a queue operation is
    retried upon intermittent failure. Set max_count with the
    'max_count.max_retries_per_part' setting.
  + "none": Do not retry file operations upon failure.
* Default: max_count
remote_queue.sqs.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* The URL of the remote storage system supporting the S3 API.
* Use the scheme, either http or https, to enable or disable SSL
  connectivity with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
  auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
  either a value specified via 'remote_queue.sqs.auth_region' or a value
  constructed automatically based on the EC2 region of the running
  instance.
* Example: https://s3-us-west-2.amazonaws.com/
* Default: not set
remote_queue.sqs.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The remote storage location where messages larger than the underlying
  queue's maximum message size will reside.
* The format for this value is: <scheme>://<remote-location-specifier>
  * The "scheme" identifies a supported external storage system type.
  * The "remote-location-specifier" is an external system-specific string
    for identifying a location inside the storage system.
* The following external systems are supported:
  * Object stores that support AWS's S3 protocol. These stores use the
    scheme "s3". For example, "path=s3://mybucket/some/path".
* If not specified, the queue drops messages exceeding the underlying
  queue's maximum message size.
* Default: not set
remote_queue.sqs.send_interval = <number><unit>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The interval that the remote queue output processor waits for data to
arrive before sending a partial batch to the remote queue.
* Examples: 30s, 1m
* Default: 30s
remote_queue.sqs.max_queue_message_size = <integer>[KB|MB|GB]
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The maximum message size to which events are batched for upload to the
  remote queue.
* Specify this value as an integer followed by KB, MB, or GB (for example,
  10MB is 10 megabytes).
* Queue messages are sent to the remote queue when the next event processed
  would otherwise result in a message exceeding the maximum message size.
* The maximum value for this setting is 5GB.
* Default: 10MB
remote_queue.sqs.enable_data_integrity_checks = <boolean>
* If "true", Splunk software sets the data checksum in the metadata field
  of the HTTP header during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote_queue.sqs.enable_signed_payloads = <boolean>
* If "true", Splunk software signs the payload during upload operation
to S3.
* This setting is valid only for remote.s3.signature_version = v4
* Default: true
remote_queue.kinesis.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the access key to use when authenticating with the remote queue
  system supporting the Kinesis API.
* If not specified, the forwarder looks for the environment variables
  AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
  variables are not set and the forwarder is running on EC2, the forwarder
  attempts to use the secret key from the IAM role.
* Default: not set
remote_queue.kinesis.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the secret key to use when authenticating with the remote queue
  system supporting the Kinesis API.
* If not specified, the forwarder looks for the environment variables
  AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the
  environment variables are not set and the forwarder is running on EC2,
  the forwarder attempts to use the secret key from the IAM role.
* Default: not set
remote_queue.kinesis.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* The authentication region to use when signing the requests when
  interacting with the remote queue system supporting the Kinesis API.
* If not specified and the forwarder is running on EC2, the auth_region is
  constructed automatically based on the EC2 region of the instance where
  the forwarder is running.
* Default: not set
remote_queue.kinesis.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The URL of the remote queue system supporting the Kinesis API.
* Use the scheme, either http or https, to enable or disable SSL
  connectivity with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
  auth_region as follows: https://kinesis.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
  either a value specified via the 'remote_queue.kinesis.auth_region'
  setting or a value constructed automatically based on the EC2 region of
  the running instance.
* Example: https://kinesis.us-west-2.amazonaws.com/
remote_queue.kinesis.enable_data_integrity_checks = <boolean>
* If "true", Splunk software sets the data checksum in the metadata field
  of the HTTP header during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote_queue.kinesis.enable_signed_payloads = <boolean>
* If "true", Splunk software signs the payload during upload operation
to S3.
* This setting is valid only for remote.s3.signature_version = v4
* Default: true
remote_queue.kinesis.retry_policy = max_count|none
* Sets the retry policy to use for remote queue operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that
  fail for those failures that might be intermittent.
* Retry policies:
  + "max_count": Imposes a maximum number of times a queue operation is
    retried upon intermittent failure. Specify the max_count with the
    'max_count.max_retries_per_part' setting.
  + "none": Do not retry file operations upon failure.
* Default: max_count
remote_queue.kinesis.max_count.max_retries_per_part = <unsigned integer>
* When the 'remote_queue.kinesis.retry_policy' setting is max_count,
  sets the maximum number of times a queue operation is retried
  upon intermittent failure.
* Optional.
* Default: 9
remote_queue.kinesis.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* The URL of the remote storage system supporting the S3 API.
* Use the scheme, either http or https, to enable or disable SSL
  connectivity with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
  auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
  either a value specified via 'remote_queue.kinesis.auth_region' or a
  value constructed automatically based on the EC2 region of the running
  instance.
* Example: https://s3-us-west-2.amazonaws.com/
* Default: not set
remote_queue.kinesis.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The remote storage location where messages larger than the underlying
  queue's maximum message size will reside.
* The format for this setting is: <scheme>://<remote-location-specifier>
  * The "scheme" identifies a supported external storage system type.
  * The "remote-location-specifier" is an external system-specific string
    for identifying a location inside the storage system.
* The following external systems are supported:
  * Object stores that support AWS's S3 protocol. These stores use the
    scheme "s3". For example, "path=s3://mybucket/some/path".
* If not specified, the queue drops messages exceeding the underlying
  queue's maximum message size.
* Default: not set
remote_queue.kinesis.send_interval = <number><unit>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The interval that the remote queue output processor waits for data to
arrive before sending a partial batch to the remote queue.
* For example, 30s, 1m
* Default: 30s
remote_queue.kinesis.max_queue_message_size = <integer>[KB|MB|GB]
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The maximum message size to which events are batched for upload to the
  remote queue.
* Specify this value as an integer followed by KB or MB (for example, 500KB
  is 500 kilobytes).
* Queue messages are sent to the remote queue when the next event processed
  would otherwise result in the message exceeding the maximum message size.
* The maximum value for this setting is 5GB.
* Default: 10MB
outputs.conf.example
# Version 7.2.1
#
# This file contains an example outputs.conf. Use this file to configure
# forwarding in a distributed setup.
#
# To use one or more of these configurations, copy the configuration block
# into outputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[tcpout:group1]
server=10.1.1.197:9997
[tcpout:group2]
server=myhost.Splunk.com:9997
[tcpout:group3]
server=myhost.Splunk.com:9997,10.1.1.197:6666
[tcpout:group4]
server=foo.Splunk.com:9997
heartbeatFrequency=45
maxQueueSize=100500
# Clone events to groups indexer1 and indexer2. Also, index all this data
# locally as well.
[tcpout]
indexAndForward=true
[tcpout:indexer1]
server=Y.Y.Y.Y:9997
[tcpout:indexer2]
server=X.X.X.X:6666
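# Clone events between two load-balanced groups: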
[tcpout:indexer1]
server=A.A.A.A:1111, B.B.B.B:2222
[tcpout:indexer2]
server=C.C.C.C:3333, D.D.D.D:4444
# Send events to a syslog host in syslog-compliant format:
[syslog:syslog-out1]
disabled = false
server = X.X.X.X:9099
type = tcp
priority = <34>
timestampformat = %b %e %H:%M:%S
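# Auto load balancing through a single DNS name that resolves to multiple
# indexers: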
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = splunkLB.example.com:4433
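# Alternatively, specify the indexers in the group directly: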
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = 1.2.3.4:4433, 1.2.3.5:4433
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
# SSL
#
# This example sends events to an indexer via SSL using Splunk's
# self-signed cert:
[tcpout]
server = splunkServer.example.com:4433
sslPassword = password
clientCert = $SPLUNK_HOME/etc/auth/server.pem
#
# The following example shows how to route events to a syslog server.
# This is similar to tcpout routing, but DEST_KEY is set to _SYSLOG_ROUTING.
#
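# Note: the TRANSFORMS-routing setting belongs in props.conf and the
# [syslogRouting] stanza in transforms.conf; only the [syslog:<group>]
# stanzas below are outputs.conf settings.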
[syslog]
TRANSFORMS-routing=syslogRouting
[syslogRouting]
REGEX=.
DEST_KEY=_SYSLOG_ROUTING
FORMAT=syslogGroup
[syslog:syslogGroup]
server = 10.1.1.197:9997
[syslog:errorGroup]
server=10.1.1.200:9999
[syslog:everythingElseGroup]
server=10.1.1.250:6666
#
# Perform selective indexing and forwarding
#
# With a heavy forwarder only, you can index and store data locally, as well
# as forward the data onwards to a receiving indexer. There are two ways to
# do this:
# 1. In outputs.conf:
[tcpout]
defaultGroup = indexers
[indexAndForward]
index=true
selectiveIndexing=true
[tcpout:indexers]
server = 10.1.1.197:9997, 10.1.1.200:9997
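# 2. In inputs.conf, route inputs with _INDEX_AND_FORWARD_ROUTING and
# _TCP_ROUTING: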
[monitor:///var/log/messages/]
_INDEX_AND_FORWARD_ROUTING=local
[monitor:///var/log/httpd/]
_TCP_ROUTING=indexers
passwords.conf
The following are the spec and example files for passwords.conf.
passwords.conf.spec
# Version 7.2.1
#
# This file maintains the credential information for a given app in
# Splunk Enterprise.
#
# There is no global, default passwords.conf. Instead, anytime a user
# creates or edits a credential via the storage/passwords endpoint, Splunk
# Enterprise creates this passwords.conf file, which gets replicated in a
# search head clustering environment.
# Note that passwords.conf is only created as of the 6.3.0 release.
#
# You must restart Splunk Enterprise to reload manual changes to
# passwords.conf.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# More details about the storage endpoint are at
# http://blogs.splunk.com/2011/03/15/storing-encrypted-credentials/
[credential:<realm>:<username>:]
password = <password>
* The password that corresponds to the given username for the given realm.
  Note that the realm is optional.
* The password can be in clear text; however, when saved from splunkd, the
  password will always be encrypted.
passwords.conf.example
# Version 7.2.1
#
# The following are example passwords.conf configurations. Configure
# properties for your custom application.
#
# There is NO DEFAULT passwords.conf. The file only gets created once you
# add or edit credential information via the storage/passwords endpoint as
# follows.
#
# A POST request to add user1 credentials to the storage/passwords endpoint:
# curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/storage/passwords -d name=user1 -d password=changeme2
#
# A GET request to list all the credentials stored at the storage/passwords
# endpoint:
# curl -k -u admin:changeme https://localhost:8089/services/storage/passwords
#
# To use one or more of these configurations, copy the configuration block
# into passwords.conf in $SPLUNK_HOME/etc/<apps>/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
[credential::testuser:]
password = changeme
procmon-filters.conf
The following are the spec and example files for procmon-filters.conf.
procmon-filters.conf.spec
# Version 7.2.1
#
# *** DEPRECATED ***
#
#
# This file contains potential attribute/value pairs to use when configuring
# Windows registry monitoring. The procmon-filters.conf file contains the
# regular expressions you create to refine and filter the processes you want
# Splunk to monitor. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<stanza name>]
proc = <string>
* Regex specifying process image that you want Splunk to monitor.
614
type = <string>
* Regex specifying the type(s) of process event that you want Splunk to
monitor.
hive = <string>
* Not used in this context, but should always have value ".*"
procmon-filters.conf.example
# Version 7.2.1
#
# This file contains example registry monitor filters. To create your own
# filter, use the information in procmon-filters.conf.spec.
#
# To use one or more of these configurations, copy the configuration block
# into procmon-filters.conf in $SPLUNK_HOME/etc/system/local/. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
hive = .*
[not-splunk-optimize]
proc = (?<!splunk-optimize.exe)$
type = create|exit|image
props.conf
The following are the spec and example files for props.conf.
props.conf.spec
# Version 7.2.1
#
# This file contains possible setting/value pairs for configuring Splunk
# software's processing properties via props.conf.
#
# Props.conf is commonly used for:
#
# * Configuring line breaking for multi-line events.
# * Setting up character set encoding.
# * Allowing processing of binary files.
# * Configuring timestamp recognition.
# * Configuring event segmentation.
# * Overriding automated host and source type matching. You can use
# props.conf to:
# * Configure advanced (regex-based) host and source type overrides.
# * Override source type matching for data from a particular source.
# * Set up rule-based source type recognition.
# * Rename source types.
# * Anonymizing certain types of sensitive incoming data, such as credit
# card or social security numbers, using sed scripts.
# * Routing specific events to a particular index, when you have multiple
#   indexes.
# * Creating new index-time field extractions, including header-based field
#   extractions.
#   NOTE: We do not recommend adding to the set of fields that are extracted
#         at index time unless it is absolutely necessary because there are
#         negative performance implications.
# * Defining new search-time field extractions. You can define basic
#   search-time field extractions entirely through props.conf, but a
#   transforms.conf component is required if you need to create search-time
#   field extractions that involve one or more of the following:
#   * Reuse of the same field-extracting regular expression across
#     multiple sources, source types, or hosts.
#   * Application of more than one regex to the same source, source type,
#     or host.
#   * Delimiter-based field extractions (they involve field-value pairs
#     that are separated by commas, colons, semicolons, bars, or
#     something similar).
#   * Extraction of multiple values for the same field (multivalued
#     field extraction).
#   * Extraction of fields with names that begin with numbers or
#     underscores.
# * Setting up lookup tables that look up fields from external sources.
# * Creating field aliases.
#
# NOTE: Several of the above actions involve a corresponding transforms.conf
# configuration.
#
# You can find more information on these topics by searching the Splunk
# documentation (http://docs.splunk.com/Documentation/Splunk).
#
# There is a props.conf in $SPLUNK_HOME/etc/system/default/. To set
custom
# configurations, place a props.conf in $SPLUNK_HOME/etc/system/local/.
For
# help, see props.conf.example.
#
# You can enable configurations changes made to props.conf by typing
the
# following search string in Splunk Web:
#
# | extract reload=T
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For more information about using props.conf in conjunction with
# distributed Splunk deployments, see the Distributed Deployment Manual.
GLOBAL SETTINGS
[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of
  different <spec>.
* Follow this stanza name with any number of the following setting/value
  pairs, as appropriate for what you want to do.
* If you do not set a setting for a given <spec>, the default is used.
<spec> can be:
1. <sourcetype>, the source type of an event.
2. host::<host>, where <host> is the host, or host-matching pattern, for an
   event.
3. source::<source>, where <source> is the source, or source-matching
   pattern, for an event.
4. rule::<rulename>, where <rulename> is a unique name of a source type
   classification rule.
5. delayedrule::<rulename>, where <rulename> is a unique name of a delayed
   source type classification rule. These are only considered as a last
   resort before generating a new source type based on the source seen.
Example: [source::c:\\path_to\\file.txt]
When setting a [<spec>] stanza, you can use the following regex-type syntax:
... recurses through directories until the match is met
    or equivalently, matches any number of characters.
*   matches anything but the path separator 0 or more times.
    The path separator is '/' on unix, or '\' on windows.
    Intended to match a partial or complete directory or filename.
| is equivalent to 'or'
( ) are used to limit scope of |.
\\ = matches a literal backslash '\'.
Example: [source::....(?<!tar.)(gz|bz2)]
This matches any file ending with '.gz' or '.bz2', provided this is not
preceded by 'tar.', so tar.bz2 and tar.gz would not be matched.
Match expressions must match the entire name, not just a substring. If you
are familiar with regular expressions, match expressions are based on a full
implementation of PCRE with the translation of ..., * and . Thus . matches a
period, * matches non-directory separators, and ... matches any number of
any characters.

For more information search the Splunk documentation for "specify input
paths with wildcards".
However, suppose two [<spec>] stanzas supply the same setting. In this case,
Splunk software chooses the value to apply based on the ASCII order of the
patterns in question. For example, given the source

source::az

and the following colliding patterns, the settings in [source::...a...]
take precedence, because 'a' comes before 'z' in ASCII order:

[source::...a...]
sourcetype = a

[source::...z...]
sourcetype = z

To override this default ASCII ordering, use the priority key. Assigning a
higher priority to the second stanza causes its settings to win:

[source::...a...]
sourcetype = a
priority = 5

[source::...z...]
sourcetype = z
priority = 10
Match expressions for [host::<host>] and [source::<source>] stanzas are
case-insensitive by default. Prefix an expression with "(?-i)" to make it
case-sensitive. For example:

[host::foo]
FIELDALIAS-a = a AS one

[host::(?-i)bar]
FIELDALIAS-b = b AS two

The first stanza will actually apply to events with host values of "FOO" or
"Foo". The second stanza, on the other hand, will not apply to events with
host values of "BAR" or "Bar".
NOTE: Setting the priority key to a value greater than 100 causes the
pattern-matched [<spec>] stanzas to override the values of the
literal-matching [<spec>] stanzas. Also be aware that the priority key does
*not* affect precedence across <spec> types. For example, [<spec>] stanzas
with [source::<source>] patterns take priority over stanzas with
[host::<host>] and [<sourcetype>] patterns, regardless of their respective
priority key values.
#******************************************************************************
# The possible setting/value pairs for props.conf, and their
# default values, are:
#******************************************************************************
priority = <number>
* Overrides the default ASCII ordering of matching stanza names
CHARSET = <string>
* When set, Splunk software assumes the input from the given [<spec>] is in
  the specified encoding.
* Can only be used as the basis of [<sourcetype>] or [source::<spec>],
  not [host::<spec>].
* A list of valid encodings can be retrieved using the command "iconv -l" on
  most *nix systems.
* If an invalid encoding is specified, a warning is logged during initial
  configuration and further input from that [<spec>] is discarded.
* If the source encoding is valid, but some characters from the [<spec>] are
  not valid in the specified encoding, then the characters are escaped as
  hex (for example, "\xF3").
* When set to "AUTO", Splunk software attempts to automatically determine
  the character encoding and convert text from that encoding to UTF-8.
* For a complete list of the character sets Splunk software automatically
  detects, see the online documentation.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to ASCII.
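For illustration, a minimal stanza (the sourcetype name legacy_app_logs is
hypothetical) that asks Splunk software to detect the encoding
automatically instead of assuming ASCII:

[legacy_app_logs]
CHARSET = AUTO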
Line breaking
** Special considerations for LINE_BREAKER with branched expressions **

For example, given LINE_BREAKER = end(\n)begin|end2(\n)begin2|begin3:

* A line ending with 'end2' followed by a line beginning with 'begin2'
  would match the second branch, and the second capturing group would have
  a match. That second capturing group would become the linebreak
  according to rule 2, and the associated newline would become a break
  between lines.
* The text 'begin3' anywhere in the file at all would match the third
  branch, and there would be no capturing group with a match. A linebreak
  would be assumed immediately prior to the text 'begin3' so a linebreak
  would be inserted prior to this text in accordance with rule 3. This
  means that a linebreak will occur before the text 'begin3' at any point
  in the text, whether a linebreak character exists or not.

A second example: LINE_BREAKER = end2?(\n)begin(2|3)?
LINE_BREAKER_LOOKBEHIND = <integer>
* When there is leftover data from a previous raw chunk,
  LINE_BREAKER_LOOKBEHIND indicates the number of bytes before the end of
  the raw chunk (with the next chunk concatenated) that Splunk applies the
  LINE_BREAKER regex. You may want to increase this value from its default
  if you are dealing with especially large or multi-line events.
* Defaults to 100 (bytes).
SHOULD_LINEMERGE = [true|false]
* When set to true, Splunk software combines several lines of data into a
  single multi-line event, based on the following configuration settings.
* Defaults to true.
BREAK_ONLY_BEFORE_DATE = [true|false]
* When set to true, Splunk software creates a new event only if it
  encounters a new line with a date.
* Note, when using DATETIME_CONFIG = CURRENT or NONE, this setting is not
  meaningful, as timestamps are not identified.
* Defaults to true.
MAX_EVENTS = <integer>
* Specifies the maximum number of input lines to add to any event.
* Splunk software breaks after the specified number of lines are read.
* Defaults to 256 (lines).
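A minimal line-merging sketch, assuming a multi-line sourcetype named
java_stacktraces (hypothetical) whose events each begin with a dated line:

[java_stacktraces]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
MAX_EVENTS = 500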
# Use the following settings to handle better load balancing from UF.
# Please note the EVENT_BREAKER properties are applicable for Splunk
# Universal Forwarder instances only.
EVENT_BREAKER_ENABLE = [true|false]
* When set to true, Splunk software will split incoming data with a
  light-weight chunked line breaking processor so that data is distributed
  fairly evenly amongst multiple indexers. Use this setting on the UF to
  indicate that data should be split on event boundaries across indexers,
  especially for large files.
* Defaults to false.
# Use the following to define event boundaries for multi-line events.
# For single-line events, the default settings should suffice.
EVENT_BREAKER = <regex>
* When set, Splunk software uses this regex to define an event boundary at
  the end of the first matching group instance.
Timestamp extraction configuration

TIME_PREFIX = <regex>
* If set, Splunk software scans the event text for a match for this regex
  before attempting to extract a timestamp.
* The timestamping algorithm only looks for a timestamp in the text
  following the end of the first regex match.
* For example, if TIME_PREFIX is set to "abc123", only text following the
  first occurrence of the text abc123 will be used for timestamp
  extraction.
* If the TIME_PREFIX cannot be found in the event text, timestamp
  extraction will not occur.
* Defaults to empty.
MAX_TIMESTAMP_LOOKAHEAD = <integer>
* Specifies how far (in characters) into an event Splunk software should
  look for a timestamp.
* This constraint to timestamp extraction is applied from the point of the
  TIME_PREFIX-set location.
* For example, if TIME_PREFIX positions a location 11 characters into the
  event, and MAX_TIMESTAMP_LOOKAHEAD is set to 10, timestamp extraction
  will be constrained to characters 11 through 20.
* If set to 0, or -1, the length constraint for timestamp recognition is
  effectively disabled. This can have negative performance implications
  which scale with the length of input lines (or with event size when
  LINE_BREAKER is redefined for event splitting).
* Defaults to 128 (characters).
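A minimal timestamp sketch, assuming events that carry a marker like
"ts=2018-12-14 10:19:00" (the sourcetype name app_events and the event
shape are hypothetical; 19 characters covers the date and time shown):

[app_events]
TIME_PREFIX = ts=
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = UTC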
TZ = <timezone identifier>
* The algorithm for determining the time zone for a particular event is as
  follows:
  * If the event has a timezone in its raw text (for example, UTC, -08:00),
    use that.
  * If TZ is set to a valid timezone string, use that.
  * If the event was forwarded, and the forwarder-indexer connection is
    using the 6.0+ forwarding protocol, use the timezone provided by the
    forwarder.
  * Otherwise, use the timezone of the system that is running splunkd.
* Defaults to empty.
TZ_ALIAS = <key=value>[,<key=value>]...
* Provides Splunk software admin-level control over how timezone strings
  extracted from events are interpreted.
* For example, EST can mean Eastern (US) Standard time, or Eastern
  (Australian) Standard time. There are many other three letter timezone
  acronyms with many expansions.
* There is no requirement to use TZ_ALIAS if the traditional Splunk software
  default mappings for these values have been as expected. For example, EST
  maps to the Eastern US by default.
* Has no effect on TZ value; this only affects timezone strings from event
  text, either from any configured TIME_FORMAT, or from pattern-based guess
  fallback.
* The setting is a list of key=value pairs, separated by commas.
  * The key is matched against the text of the timezone specifier of the
    event, and the value is the timezone specifier to use when mapping the
    timestamp to UTC/GMT.
  * The value is another TZ specifier which expresses the desired offset.
  * Example: TZ_ALIAS = EST=GMT+10:00 (See props.conf.example for more/full
    examples)
* Defaults to unset.
MAX_DAYS_AGO = <integer>
* Specifies the maximum number of days in the past, from the current date
  as provided by the input layer (for example, forwarder current time, or
  modtime for files), that an extracted date can be valid. Splunk software
  still indexes events with dates older than MAX_DAYS_AGO with the
  timestamp of the last acceptable event. If no such acceptable event
  exists, new events with timestamps older than MAX_DAYS_AGO will use the
  current timestamp.
* For example, if MAX_DAYS_AGO = 10, Splunk software applies the timestamp
  of the last acceptable event to events with extracted timestamps older
  than 10 days in the past. If no acceptable event exists, Splunk software
  applies the current timestamp.
* Defaults to 2000 (days), maximum 10951.
* IMPORTANT: If your data is older than 2000 days, increase this setting.
MAX_DAYS_HENCE = <integer>
* Specifies the maximum number of days in the future, from the current date
  as provided by the input layer (for example, forwarder current time, or
  modtime for files), that an extracted date can be valid. Splunk software
  still indexes events with dates more than MAX_DAYS_HENCE in the future
  with the timestamp of the last acceptable event. If no such acceptable
  event exists, new events with timestamps after MAX_DAYS_HENCE will use
  the current timestamp.
* For example, if MAX_DAYS_HENCE = 3, Splunk software applies the timestamp
  of the last acceptable event to events with extracted timestamps more
  than 3 days in the future. If no acceptable event exists, Splunk software
  applies the current timestamp.
* The default value includes dates from one day in the future.
* If your servers have the wrong date set or are in a timezone that is one
  day ahead, increase this value to at least 3.
* Defaults to 2 (days), maximum 10950.
* IMPORTANT: False positives are less likely with a tighter window, change
  with caution.
MAX_DIFF_SECS_AGO = <integer>
* This setting prevents Splunk software from rejecting events with
  timestamps that are out of order.
* Do not use this setting to filter events because Splunk software uses
  complicated heuristics for time parsing.
* Splunk software warns you if an event timestamp is more than <integer>
  seconds BEFORE the previous timestamp and does not have the same time
  format as the majority of timestamps from the source.
* After Splunk software throws the warning, it only rejects an event if it
  cannot apply a timestamp to the event (for example, if Splunk software
  cannot recognize the time of the event.)
* IMPORTANT: If your timestamps are wildly out of order, consider
  increasing this value.
* Note: if the events contain time but not date (date determined another
  way, such as from a filename) this check will only consider the hour.
  (No one-second granularity for this purpose.)
* Defaults to 3600 (one hour), maximum 2147483646.
MAX_DIFF_SECS_HENCE = <integer>
* This setting prevents Splunk software from rejecting events with
  timestamps that are out of order.
* Do not use this setting to filter events because Splunk software uses
  complicated heuristics for time parsing.
* Splunk software warns you if an event timestamp is more than <integer>
  seconds AFTER the previous timestamp and does not have the same time
  format as the majority of timestamps from the source.
* After Splunk software throws the warning, it only rejects an event if it
  cannot apply a timestamp to the event (for example, if Splunk software
  cannot recognize the time of the event.)
* IMPORTANT: If your timestamps are wildly out of order, or you have logs
  that are written less than once a week, consider increasing this value.
* Defaults to 604800 (one week), maximum 2147483646.
ADD_EXTRA_TIME_FIELDS = [true|false]
* This setting controls whether or not the following keys will be
  automatically generated and indexed with events:
  date_hour, date_mday, date_minute, date_month, date_second, date_wday,
  date_year, date_zone, timestartpos, timeendpos, timestamp.
* These fields are never required, and may be turned off as desired.
* Defaults to true and is enabled for most data sources.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
Structured Data Header Extraction and configuration

# Special characters for Structured Data Header Extraction:
# Some unprintable characters can be described with escape sequences. The
# settings that can use these characters specifically mention that
# capability in their descriptions below.
# \f : form feed      byte: 0x0c
# \s : space          byte: 0x20
# \t : horizontal tab byte: 0x09
# \v : vertical tab   byte: 0x0b
INDEXED_EXTRACTIONS = <CSV|TSV|PSV|W3C|JSON|HEC>
* Tells Splunk software the type of file and the extraction and/or parsing
  method Splunk software should use on the file.
  CSV  - Comma separated value format
  TSV  - Tab-separated value format
  PSV  - pipe "|" separated value format
  W3C  - W3C Extended Log File Format
  JSON - JavaScript Object Notation format
  HEC  - Interpret file as a stream of JSON events in the same format as
         the HTTP Event Collector input.
* These settings default the values of the remaining settings to the
  appropriate values for these known formats.
* Keep in mind that the HTTP Event Collector format allows the event to
  override many details on a per-event basis, such as the destination
  index. It should be only used to read data which is known to be
  well-formatted and safe, such as data output by locally written tools.
* Defaults to unset.
METRICS_PROTOCOL = <STATSD|COLLECTD_HTTP>
* Tells Splunk software which protocol the incoming metric data is using:
  STATSD - Supports statsd protocol, in the following format:
           <metric name>:<value>|<metric type>
           Use the STATSD-DIM-TRANSFORMS setting to manually extract
           dimensions for the above format. Splunk software auto-extracts
           dimensions when the data has "#" as dimension delimiter
           as shown below:
           <metric name>:<value>|<metric type>|#<dim1>:<val1>,<dim2>:<val2>...
  COLLECTD_HTTP - This is data from the write_http collectd plugin being
           parsed as streaming JSON docs with the _value living in the
           "values" array and the dimension names in "dsnames" and the
           metric type (for example, counter vs gauge) derived from
           "dstypes".
* Defaults to unset, for event (non-metric) data.
STATSD-DIM-TRANSFORMS = <statsd_dim_stanza_name1>,<statsd_dim_stanza_name2>..
* Used only when METRICS_PROTOCOL is set as statsd.
* A comma separated list of transforms stanza names which are used to
  extract dimensions from statsd metric data.
* Optional for a sourcetype which has only one transforms stanza for
  extracting dimensions, where the stanza name is the same as that of the
  sourcetype's name.
METRIC-SCHEMA-TRANSFORMS = <metric-schema:stanza_name>[,<metric-schema:stanza_name>]...
* NOTE: This setting is valid only for index-time field extractions.
  You can set up the TRANSFORMS field extraction configuration to create
  index-time field extractions. The Splunk platform always applies
  METRIC-SCHEMA-TRANSFORMS after index-time field extraction takes place.
* Optional.
* A comma-separated list of metric-schema stanza names from transforms.conf
  that the Splunk platform uses to create multiple metrics from index-time
  field extractions of a single log event.
* Default: empty
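A minimal metrics sketch, assuming a statsd feed and a transforms.conf
stanza named statsd_dims that extracts the dimensions (the sourcetype and
stanza names are hypothetical):

[my_statsd_metrics]
METRICS_PROTOCOL = statsd
STATSD-DIM-TRANSFORMS = statsd_dims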
PREAMBLE_REGEX = <regex>
* Some files contain preamble lines. This setting specifies a regular
  expression which allows Splunk software to ignore these preamble lines,
  based on the pattern specified.
FIELD_HEADER_REGEX = <regex>
* A regular expression that specifies a pattern for prefixed headers. Note
  that the actual header starts after the pattern and it is not included in
  the header field.
* This setting supports the use of the special characters described above.
HEADER_FIELD_LINE_NUMBER = <integer>
* Tells Splunk software the line number of the line within the file that
  contains the header fields. If set to 0, Splunk software attempts to
  locate the header fields within the file automatically.
* The default value is set to 0.
FIELD_DELIMITER = <character>
* Tells Splunk software which character delimits or separates fields in the
  specified file or source.
* This setting supports the use of the special characters described above.
HEADER_FIELD_DELIMITER = <character>
* Tells Splunk software which character delimits or separates header fields
  in the specified file or source.
* This setting supports the use of the special characters described above.
FIELD_QUOTE = <character>
* Tells Splunk software the character to use for quotes in the specified
  file or source.
* This setting supports the use of the special characters described above.
HEADER_FIELD_QUOTE = <character>
* Specifies the character to use for quotes in the header of the
  specified file or source.
* This setting supports the use of the special characters described above.
MISSING_VALUE_REGEX = <regex>
* Tells Splunk software the placeholder to use in events where no value is
  present.
JSON_TRIM_BRACES_IN_ARRAY_NAMES = <bool>
* Tells the json parser not to add the curly braces to array names.
* Note that enabling this will make json index-time extracted array field
  names inconsistent with the spath search processor's naming convention.
* For a json document containing the following array object, with trimming
  enabled an index-time field 'mount_point' will be generated instead of
  the spath-consistent field 'mount_point{}':
  "mount_point": ["/disk48","/disk22"]
* Defaults to false.
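Putting several of the structured-data settings together, a hedged sketch
for a headered CSV file (the sourcetype name csv_inventory is hypothetical;
the delimiter and quote values simply spell out the usual CSV defaults):

[csv_inventory]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "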
Field extraction configuration
There are three different "field extraction types" that you can use to
configure field extractions: TRANSFORMS, REPORT, and EXTRACT. They differ in
two significant ways: 1) whether they create indexed fields (fields
extracted at index time) or extracted fields (fields extracted at search
time), and 2) whether they include a reference to an additional component
called a "field transform," which you define separately in transforms.conf.

There are times when you may find that you need to change or add to your set
of indexed fields. For example, you may have situations where certain
search-time field extractions are noticeably impacting search performance.
This can happen when the value of a search-time extracted field exists
outside of the field more often than not. For example, if you commonly
search a large event set with the expression company_id=1 but the value 1
occurs in many events that do *not* have company_id=1, you may want to add
company_id to the list of fields extracted by Splunk software at index time.
This is because at search time, Splunk software will want to check each
instance of the value 1 to see if it matches company_id, and that kind of
thing slows down performance when you have Splunk searching a large set of
data.
Conversely, if you commonly search a large event set with expressions like
company_id!=1 or NOT company_id=1, and the field company_id nearly *always*
takes on the value 1, you may want to add company_id to the list of fields
extracted by Splunk software at index time.
Search-time field extractions: why use REPORT if EXTRACT will do? It's a
good question. And much of the time, EXTRACT is all you need for
search-time field extraction. But when you build search-time field
extractions, there are specific cases that require the use of REPORT and
the field transform that it references. Use REPORT if you want to:
* Set up delimiter-based field extractions, where your event data presents
  field-value pairs (or just field values) separated by delimiters such as
  commas, spaces, bars, and so on.
* Configure extractions for multivalued fields. You can have Splunk
  software append additional values to a field as it finds them in the
  event data.
* Extract fields with names beginning with numbers or underscores.
  Ordinarily, the key cleaning functionality removes leading numeric
  characters and underscores from field names. If you need to keep them,
  configure your field transform to turn key cleaning off.
* Manage formatting of extracted fields, in cases where you are extracting
  multiple fields, or are extracting both the field name and field value.
TRANSFORMS-<class> = <transform_stanza_name>, <transform_stanza_name2>,...
* Used for creating indexed fields (index-time field extractions).
* <class> is a unique literal string that identifies the namespace of the
  field you're extracting.
  **Note:** <class> values do not have to follow field name syntax
  restrictions. You can use characters other than a-z, A-Z, and 0-9, and
  spaces are allowed. <class> values are not subject to key cleaning.
* <transform_stanza_name> is the name of your stanza from transforms.conf.
* Use a comma-separated list to apply multiple transform stanzas to a
  single TRANSFORMS extraction. Splunk software applies them in the list
  order. For example, this sequence ensures that the [yellow] transform
  stanza gets applied first, then [blue], and then [red]:
  [source::color_logs]
  TRANSFORMS-colorchange = yellow, blue, red
REPORT-<class> = <transform_stanza_name>, <transform_stanza_name2>,...
* Used for creating extracted fields (search-time field extractions) that
  reference one or more transforms.conf stanzas.
* <class> is a unique literal string that identifies the namespace of the
  field you're extracting.
  **Note:** <class> values do not have to follow field name syntax
  restrictions. You can use characters other than a-z, A-Z, and 0-9, and
  spaces are allowed. <class> values are not subject to key cleaning.
* <transform_stanza_name> is the name of your stanza from transforms.conf.
* Use a comma-separated list to apply multiple transform stanzas to a
  single REPORT extraction. Splunk software applies them in the list order.
  For example, this sequence ensures that the [yellow] transform stanza
  gets applied first, then [blue], and then [red]:
  [source::color_logs]
  REPORT-colorchange = yellow, blue, red
EXTRACT-<class> = [<regex>|<regex> in <src_field>]
* Used to create extracted fields (search-time field extractions) that do
  not reference transforms.conf stanzas.
* Use '<regex> in <src_field>' to match the regex against the values of a
  specific field. Otherwise it just matches against _raw (all raw event
  data).
* NOTE: <src_field> has the following restrictions:
  * It can only contain alphanumeric characters and underscore
    (a-z, A-Z, 0-9, and _).
  * It must already exist as a field that has either been extracted at
    index time or has been derived from an EXTRACT-<class> configuration
    whose <class> ASCII value is *higher* than the configuration in which
    you are attempting to extract the field. For example, if you have an
    EXTRACT-ZZZ configuration that extracts <src_field>, then you can only
    use 'in <src_field>' in an EXTRACT configuration with a <class> of
    'aaa' or lower, as 'aaa' is lower in ASCII value than 'ZZZ'.
  * It cannot be a field that has been derived from a transform field
    extraction (REPORT-<class>), an automatic key-value field extraction
    (in which you configure the KV_MODE setting to be something other than
    'none'), a field alias, a calculated field, or a lookup, as these
    operations occur after inline field extractions (EXTRACT-<class>) in
    the search-time operations sequence.
* If your regex needs to end with 'in <string>' where <string> is *not* a
  field name, change the regex to end with '[i]n <string>' to ensure that
  Splunk software doesn't try to match <string> to a field name.
KV_MODE = [none|auto|auto_escaped|multi|json|xml]
* Used for search-time field extractions only.
* Specifies the field/value extraction mode for the data.
* Set KV_MODE to one of the following:
  * none: if you want no field/value extraction to take place.
  * auto: extracts field/value pairs separated by equal signs.
  * auto_escaped: extracts field/value pairs separated by equal signs and
    honors \" and \\ as escaped sequences within quoted values, e.g.
    field="value with \"nested\" quotes"
  * multi: invokes the multikv search command to expand a tabular event
    into multiple events.
  * xml : automatically extracts fields from XML data.
  * json: automatically extracts fields from JSON data.
* Setting to 'none' can ensure that one or more user-created regexes are
  not overridden by automatic field/value extraction for a particular host,
  source, or source type, and also increases search performance.
* Defaults to auto.
* The 'xml' and 'json' modes will not extract any fields when used on data
  that isn't of the correct format (JSON or XML).
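For example, to rely on automatic JSON extraction for one hypothetical
sourcetype, and to switch automatic extraction off for another where only
hand-written regexes should apply:

[api_json_events]
KV_MODE = json

[raw_binary_feed]
KV_MODE = none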
MATCH_LIMIT = <integer>
* Only set in props.conf for EXTRACT type field extractions.
  For REPORT and TRANSFORMS field extractions, set this in transforms.conf.
* Optional. Limits the amount of resources that will be spent by PCRE
  when running patterns that will not match.
* Use this to set an upper bound on how many times PCRE calls an internal
  function, match(). If set too low, PCRE may fail to correctly match a
  pattern.
* Defaults to 100000.
DEPTH_LIMIT = <integer>
* Only set in props.conf for EXTRACT type field extractions.
  For REPORT and TRANSFORMS field extractions, set this in transforms.conf.
* Optional. Limits the amount of resources that are spent by PCRE
  when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
  function, match(). If set too low, PCRE might fail to correctly match
  a pattern.
* Default: 1000
AUTO_KV_JSON = [true|false]
* Used for search-time field extractions only.
* Specifies whether to try json extraction automatically.
* Defaults to true.
KV_TRIM_SPACES = true|false
* Modifies the behavior of KV_MODE when set to auto, and auto_escaped.
* Traditionally, automatically identified fields have leading and trailing
  whitespace removed from their values.
  * Example event: 2014-04-04 10:10:45 myfield=" apples "
    would result in a field called 'myfield' with a value of 'apples'.
* If this value is set to false, then this external whitespace is retained.
  * Example: 2014-04-04 10:10:45 myfield=" apples "
    would result in a field called 'myfield' with a value of ' apples '.
* The trimming logic applies only to space characters, not tabs, or other
  whitespace.
* NOTE: Splunk Web currently has limitations with displaying and
  interactively clicking on fields that have leading or trailing
  whitespace. Field values with leading or trailing spaces may not look
  distinct in the event viewer, and clicking on a field value will
  typically insert the term into the search string without its embedded
  spaces.
  * These warts are not specific to this feature. Any such embedded spaces
    will behave this way.
  * The Splunk search language and included commands will respect the
    spaces.
* Defaults to true.
CHECK_FOR_HEADER = [true|false]
* Used for index-time field extractions only.
* Set to true to enable header-based field extraction for a file.
* If the file has a list of columns and each event contains a field value
  (without field name), Splunk software picks a suitable header line to
  use for extracting field names.
* Can only be used on the basis of [<sourcetype>] or [source::<spec>],
  not [host::<spec>].
* Disabled when LEARN_SOURCETYPE = false.
* Will cause the indexed source type to have an appended numeral; for
  example, sourcetype-2, sourcetype-3, and so on.
* The field names are stored in etc/apps/learned/local/props.conf.
  * Because of this, this feature will not work in most environments where
    the data is forwarded.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to false.
FIELDALIAS-<class> = (<orig_field_name> AS <new_field_name>)+
* Use this to apply aliases to a field. The original field is not removed;
  it can then be searched on using any of its aliases.
* <orig_field_name> is the original name of the field.
* <new_field_name> is the alias to assign to the field.
* You can include multiple field alias renames in the same stanza.
* Field aliasing is performed at search time, after field extraction, but
  before calculated fields (EVAL-* statements) and lookups.
  This means that:
  * Any field extracted at search time can be aliased.
  * You can specify a lookup based on a field alias.
  * You cannot alias a calculated field.
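A minimal aliasing sketch (the sourcetype and field names are hypothetical)
that lets the extracted field src_ip also be searched as client_ip:

[firewall_logs]
FIELDALIAS-client = src_ip AS client_ip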
LOOKUP-<class> = $TRANSFORM (<match_field> (AS <match_field_in_event>)?)+ (OUTPUT|OUTPUTNEW (<output_field> (AS <output_field_in_event>)?)+)?
* At search time, identifies a specific lookup table and describes how that
  lookup table should be applied to events. Each <output_field> is written
  out for each matching event.
* If the output field list starts with the keyword "OUTPUTNEW" instead of
  "OUTPUT", then each output field is only written out if it did not
  previously exist. Otherwise, the output fields are always overridden. Any
  event that has all of the <match_field> values but no matching entry in
  the lookup table clears all of the output fields. NOTE that OUTPUTNEW
  behavior has changed since 4.1.x (where *none* of the output fields were
  written to if *any* of the output fields previously existed).
* Splunk software processes lookups after it processes field extractions,
  field aliases, and calculated fields (EVAL-* statements). This means that
  you can use extracted fields, aliased fields, and calculated fields to
  specify lookups. But you can't use fields discovered by lookups in the
  configurations of extracted fields, aliased fields, or calculated fields.
* The LOOKUP- prefix is actually case-insensitive. Acceptable variants
  include:
  LOOKUP_<class> = [...]
  LOOKUP<class>  = [...]
  lookup_<class> = [...]
  lookup<class>  = [...]
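A hedged sketch (the sourcetype, the transforms.conf lookup stanza
dnslookup, and the field names are all hypothetical) that matches the event
field host against the lookup table and writes an ip field only where none
exists yet:

[web_access_logs]
LOOKUP-dns = dnslookup host OUTPUTNEW ip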
NO_BINARY_CHECK = [true|false]
* When set to true, Splunk software processes binary files.
* Can only be used on the basis of [<sourcetype>], or [source::<source>],
  not [host::<host>].
* Defaults to false (binary files are ignored).
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
detect_trailing_nulls = [auto|true|false]
* When enabled, Splunk software tries to avoid reading in null bytes at
  the end of a file.
* When false, Splunk software assumes that all the bytes in the file should
  be read and indexed.
* Set this value to false for UTF-16 and other encodings (CHARSET) values
  that can have null bytes as part of the character text.
* Subtleties of 'true' vs 'auto':
  * 'true' is the splunk-on-windows historical behavior of trimming all
    null bytes.
  * 'auto' is currently a synonym for true but will be extended to be
    sensitive to the charset selected (i.e. quantized for multi-byte
    encodings, and disabled for unsafe variable-width encodings).
* This feature was introduced to work around programs which foolishly
  preallocate their log files with nulls and fill in data later. The
  well-known case is Internet Information Server.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to false on *nix, true on windows.
Segmentation configuration
SEGMENTATION = <segmenter>
* Specifies the segmenter from segmenters.conf to use at index time for the
  host, source, or sourcetype specified by <spec> in the stanza heading.
* Defaults to indexing.
File checksum configuration
CHECK_METHOD = [endpoint_md5|entire_md5|modtime]
* Set CHECK_METHOD = endpoint_md5 to have Splunk software checksum the
  first and last 256 bytes of a file. When it finds matches, Splunk
  software lists the file as already indexed and indexes only new data, or
  ignores it if there is no new data.
* Set CHECK_METHOD = entire_md5 to use the checksum of the entire file.
* Set CHECK_METHOD = modtime to check only the modification time of the
  file.
* Settings other than endpoint_md5 cause Splunk software to index the
  entire file for each detected change.
* Important: this option is only valid for [source::<source>] stanzas.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to endpoint_md5.
initCrcLength = <integer>
* See documentation in inputs.conf.spec.
PREFIX_SOURCETYPE = [true|false]
* NOTE: this setting is only relevant to the "[too_small]" sourcetype.
* Determines the source types that are given to files smaller than 100
  lines, and are therefore not classifiable.
* PREFIX_SOURCETYPE = false sets the source type to "too_small."
* PREFIX_SOURCETYPE = true sets the source type to
  "<sourcename>-too_small", where "<sourcename>" is a cleaned up version of
  the filename.
  * The advantage of PREFIX_SOURCETYPE = true is that not all small files
    are classified as the same source type, and wildcard searching is
    often effective.
  * For example, a Splunk search of "sourcetype=access*" will retrieve
    "access" files as well as "access-too_small" files.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to true.
Sourcetype configuration
sourcetype = <string>
* Can only be set for a [source::...] stanza.
* Anything from that <source> is assigned the specified source type.
* Is used by file-based inputs, at input time (when accessing logfiles)
  such as on a forwarder, or indexer monitoring local files.
* sourcetype assignment settings on a system receiving forwarded Splunk
  data will not be applied to forwarded data.
* For log files read locally, data from log files matching <source> is
  assigned the specified source type.
* Defaults to empty.
# The following setting/value pairs can only be set for a stanza that
# begins with [<sourcetype>]:
rename = <string>
* Renames [<sourcetype>] as <string> at search time.
* With renaming, you can search for the [<sourcetype>] with
  sourcetype=<string>.
* To search for the original source type without renaming it, use the
  field _sourcetype.
* Data from a renamed sourcetype will only use the search-time
  configuration for the target sourcetype. Field extractions
  (REPORTS/EXTRACT) for this stanza sourcetype will be ignored.
* Defaults to empty.
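For example, to fold an old sourcetype into a new name at search time (both
names are hypothetical):

[legacy_fw]
rename = firewall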
invalid_cause = <string>
* Can only be set for a [<sourcetype>] stanza.
* If invalid_cause is set, the Tailing code (which handles uncompressed
  logfiles) will not read the data, but hand it off to other components or
  throw an error.
* Set <string> to "archive" to send the file to the archive processor
  (specified in unarchive_cmd).
* When set to "winevt", this causes the file to be handed off to the
  Event Log input processor.
* Set to any other string to throw an error in the splunkd.log if you are
  running Splunklogger in debug mode.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to empty.
is_valid = [true|false]
* Automatically set by invalid_cause.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* DO NOT SET THIS.
* Defaults to true.
force_local_processing = [true|false]
* Forces a universal forwarder to process all data tagged with this
  sourcetype locally before forwarding it to the indexers.
* Data with this sourcetype will be processed via the linebreaker,
  aggregator and the regexreplacement processors in addition to the
  existing utf8 processor.
* Note that switching this property on will potentially increase the cpu
  and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Defaults to false.
unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived
  source.
* Must be a shell command that takes input on stdin and produces output on
  stdout.
* Use _auto for Splunk software's automatic handling of archive files (tar,
  tar.gz, tgz, tbz, tbz2, zip).
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to empty.
unarchive_sourcetype = <string>
* Sets the source type of the contents of the matching archive file. Use
  this field instead of the sourcetype field to set the source type of
  archive files that have the following extensions: gz, bz, bz2, Z.
* If this field is empty (for a matching archive file props lookup), Splunk
  software strips off the archive file's extension (.gz, bz etc) and looks
  up another stanza to attempt to determine the sourcetype.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to empty.
LEARN_SOURCETYPE = [true|false]
* Determines whether learning of known or unknown sourcetypes is enabled.
* For known sourcetypes, refer to LEARN_MODEL.
* For unknown sourcetypes, refer to the rule:: and delayedrule::
  configuration (see below).
* Setting this field to false disables CHECK_FOR_HEADER as well (see
  above).
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to true.
LEARN_MODEL = [true|false]
* For known source types, the file classifier adds a model file to the
  learned directory.
* To disable this behavior for diverse source types (such as sourcecode,
  where there is no good example to make a sourcetype), set LEARN_MODEL =
  false.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to true.
maxDist = <integer>
* Determines how different a source type model may be from the current
  file.
* The larger the maxDist value, the more forgiving Splunk software will be
  with differences.
  * For example, if the value is very small (for example, 10), then files
    of the specified sourcetype should not vary much.
  * A larger value indicates that files of the given source type can vary
    quite a bit.
* If you're finding that a source type model is matching too broadly,
  reduce its maxDist value by about 100 and try again. If you're finding
  that a source type model is being too restrictive, increase its maxDist
  value by about 100 and try again.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to 300.
An example:
[rule::bar_some]
sourcetype = source_with_lots_of_bars
# if more than 80% of lines have "----", but fewer than 70% have "####",
# declare this a "source_with_lots_of_bars"
MORE_THAN_80 = ----
LESS_THAN_70 = ####
A rule can have many MORE_THAN and LESS_THAN patterns, and all are required
for the rule to match.
ANNOTATE_PUNCT = [true|false]
* Determines whether to index a special token starting with "punct::".
  * The "punct::" key contains punctuation in the text of the event.
    It can be useful for finding similar events.
  * If it is not useful for your dataset, or if it ends up taking
    too much space in your index, it is safe to disable it.
* Defaults to true.
HEADER_MODE = <empty> | always | firstline | none
* Determines whether to use the inline ***SPLUNK*** directive to rewrite
  index-time fields.
  * If "always", any line with ***SPLUNK*** can be used to rewrite
    index-time fields.
  * If "firstline", only the first line can be used to rewrite
    index-time fields.
  * If "none", the string ***SPLUNK*** is treated as normal data.
  * If <empty>, scripted inputs take the value "always" and file inputs
    take the value "none".
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to <empty>.
Internal settings
_actions = <string>
* Internal field used for user-interface control of objects.
* Defaults to "new,edit,delete".
pulldown_type = <bool>
* Internal field used for user-interface control of source types.
* Defaults to empty.
given_type = <string>
* Internal field used by the CHECK_FOR_HEADER feature to remember the
  original sourcetype.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Defaults to unset.
description = <string>
* Field used to describe the sourcetype. Does not affect indexing behavior.
* Defaults to unset.
category = <string>
* Field used to classify sourcetypes for organization in the front end.
  Case sensitive. Does not affect indexing behavior.
* Defaults to unset.
props.conf.example
# Version 7.2.1
#
# The following are example props.conf configurations. Configure properties
# for your data.
#
# To use one or more of these configurations, copy the configuration block
# into props.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
########
# Line merging settings
########
[apache_error]
SHOULD_LINEMERGE = True
########
# Settings for tuning
########
[host::small_events]
TRUNCATE = 256
[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
########
# Timestamp extraction configuration
########
# The following example sets Eastern Time Zone if host matches nyc*.
[host::nyc*]
TZ = US/Eastern
# The following example uses a custom datetime.xml that has been created
# and placed in a custom app directory. This sets all events coming in
# from hosts starting with dharma to use this custom file.
[host::dharma*]
DATETIME_CONFIG = <etc/apps/custom_time/datetime.xml>
########
# Timezone alias configuration
########
TZ_ALIAS = EST=GMT+10:00,EDT=GMT+11:00
# The following example gives a sample case wherein one timezone field is
# being replaced by/interpreted as another.
TZ_ALIAS = EST=AEST,EDT=AEDT
########
# Transform configuration
########
[host::foo]
TRANSFORMS-foo=foobar
########
# Sourcetype configuration
########
[source::.../web_access.log]
sourcetype = splunk_web_access
# The following example sets a sourcetype for the Windows file iis6.log.
# Note: Backslashes within Windows file paths must be escaped.
[source::...\\iis\\iis6.log]
sourcetype = iis_access
[syslog]
invalid_cause = archive
unarchive_cmd = gzip -cd -
# The following example learns a custom sourcetype and limits the range
# between different examples with a smaller than default maxDist.
[custom_sourcetype]
LEARN_MODEL = true
maxDist = 30
[rule::bar_some]
sourcetype = source_with_lots_of_bars
MORE_THAN_80 = ----
[delayedrule::baz_some]
sourcetype = my_sourcetype
LESS_THAN_70 = ####
########
# File configuration
########
[imported_records]
NO_BINARY_CHECK = true
[source::.../web_access/*]
CHECK_METHOD = entire_md5
########
# Metric configuration
########
[statsd_metrics]
METRICS_PROTOCOL = statsd
STATSD-DIM-TRANSFORMS = regex_stanza1, regex_stanza2
pubsub.conf
The following are the spec and example files for pubsub.conf.
pubsub.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values for configuring a
# client of the PubSub system (broker).
#
# To set custom configurations, place a pubsub.conf in
# $SPLUNK_HOME/etc/system/local/.
# For examples, see pubsub.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
#******************************************************************
# Configure the physical location where deploymentServer is running.
# This configuration is used by the clients of the pubsub system.
#******************************************************************
[pubsub-server:deploymentServer]
targetUri = <IP:Port>|<hostname:Port>|direct
* Specify either the URI of a remote server in case the broker is remote,
  or just the keyword "direct" when the broker is in-process.
* It is usually a good idea to co-locate the broker and the Deployment
  Server on the same Splunk instance. In such a configuration, all
  deployment clients would have targetUri set to deploymentServer:port.
#******************************************************************
# The following section is only relevant to Splunk developers.
#******************************************************************
[pubsub-server:direct]
disabled = false
targetUri = direct
[pubsub-server:<logicalName>]
targetUri = <IP:Port>|<hostname:Port>|direct
* The URI of a Splunk instance that is being used as a broker.
* The keyword "direct" implies that the client is running on the same
  Splunk instance as the broker.
pubsub.conf.example
# Version 7.2.1
[pubsub-server:deploymentServer]
disabled=false
targetUri=somehost:8089
[pubsub-server:internalbroker]
disabled=false
targetUri=direct
restmap.conf
The following are the spec and example files for restmap.conf.
restmap.conf.spec
# Version 7.2.1
#
# This file contains possible attribute and value pairs for creating new
# Representational State Transfer (REST) endpoints.
#
# There is a restmap.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place a restmap.conf in
# $SPLUNK_HOME/etc/system/local/. For help, see restmap.conf.example. You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# NOTE: You must register every REST endpoint via this file to make it
# available.
###########################
# Global stanza
[global]
* This stanza sets global configurations for all REST endpoints.
* Follow this stanza name with any number of the following attribute/value
  pairs.
allowGetAuth=[true|false]
* Allow user/password to be passed as a GET parameter to endpoint
  services/auth/login.
* Setting this to true, while convenient, may result in user/password
  getting logged as cleartext in Splunk's logs *and* any proxy servers in
  between.
* Defaults to false.
allowRestReplay=[true|false]
* POST/PUT/DELETE requests can be replayed on other nodes in the
  deployment.
* This enables centralized management.
* Turn on or off this feature. You can also control replay at each
  endpoint level. This feature is currently INTERNAL and should not be
  turned on without consulting Splunk support.
* Defaults to false.
defaultRestReplayStanza=<string>
* Points to global rest replay configuration stanza.
* Related to allowRestReplay
* Defaults to "restreplayshc"
pythonHandlerPath=<path>
* Path to 'main' python script handler.
* Used by the script handler to determine where the actual 'main' script
  is located.
* Typically, you should not need to change this.
* Defaults to $SPLUNK_HOME/bin/rest_handler.py.
###########################
# Applicable to all REST stanzas
# Stanza definitions below may supply additional information for these.
#
requireAuthentication=[true|false]
* This optional attribute determines if this endpoint requires
  authentication.
* Defaults to 'true'.
authKeyStanza=<stanza>
* This optional attribute determines the location of the pass4SymmKey in
  server.conf to be used for endpoint authentication.
* Defaults to the 'general' stanza.
* Only applicable if requireAuthentication is set to true.
restReplay=[true|false]
* This optional attribute enables rest replay on this endpoint group.
* Related to allowRestReplay.
* This feature is currently INTERNAL and should not be turned on without
  consulting Splunk support.
* Defaults to false.
restReplayStanza=<string>
* This points to a stanza which can override the
  [global]/defaultRestReplayStanza value on a per endpoint/regex basis.
* Defaults to empty.
capability=<capabilityName>
capability.<post|delete|get|put>=<capabilityName>
* Depending on the HTTP method, check capabilities on the authenticated
  session user.
* If you use 'capability.post|delete|get|put,' then the associated method
  is checked against the authenticated user's role.
* If you just use 'capability,' then all calls get checked against this
  capability (regardless of the HTTP method).
* Capabilities can also be expressed as a boolean expression. Supported
  operators include: or, and, ()
acceptFrom=<network_acl> ...
* Lists a set of networks or addresses to allow this endpoint to be
  accessed from.
* This shouldn't be confused with the setting of the same name in the
  [httpServer] stanza of server.conf which controls whether a host can
  make HTTP requests at all.
* Each rule can be in the following forms:
  1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
  2. A CIDR block of addresses (examples: "10/8", "fe80:1234/32")
  3. A DNS name, possibly with a '*' used as a wildcard (examples:
     "myhost.example.com", "*.splunk.com")
  4. A single '*' which matches anything
* Entries can also be prefixed with '!' to cause the rule to reject the
  connection. Rules are applied in order, and the first one to match is
  used. For example, "!10.1/16, *" will allow connections from everywhere
  except the 10.1.*.* network.
* Defaults to "*" (accept from anywhere)
includeInAccessLog=[true|false]
* If this is set to false, requests to this endpoint will not appear
  in splunkd_access.log.
* Defaults to 'true'.
###########################
# Per-endpoint stanza
# Specify a handler and other handler-specific settings.
# The handler is responsible for implementing arbitrary namespace
# underneath each REST endpoint.
[script:<uniqueName>]
* NOTE: The uniqueName must be different for each handler.
* Call the specified handler when executing this endpoint.
* The following attribute/value pairs support the script handler.
scripttype=python
* Tell the system what type of script to execute when using this endpoint.
* Defaults to python.
* If set to "persist" it will run the script via a persistent-process that
  uses the protocol from persistconn/appserver.py.
handler=<SCRIPT>.<CLASSNAME>
* The name and class name of the file to execute.
* The file *must* live in an application's bin subdirectory.
* For example, $SPLUNK_HOME/etc/apps/<APPNAME>/bin/TestHandler.py has a
  class called MyHandler (which, in the case of python must be derived
  from a base class called 'splunk.rest.BaseRestHandler'). The tag/value
  pair for this is: "handler=TestHandler.MyHandler".
script.arg.<N>=<string>
* Only has effect for scripttype=persist.
* List of arguments which are passed to the driver to start the script.
* The script can make use of this information however it wants.
* Environment variables are substituted.
script.param=<string>
* Optional.
* Only has effect for scripttype=persist.
* Free-form argument that is passed to the driver when it starts the
  script.
* The script can make use of this information however it wants.
* Environment variables are substituted.
output_modes=<csv list>
* Specifies which output formats can be requested from this endpoint.
* Valid values are: json, xml.
* Defaults to xml.
passSystemAuth=<bool>
* Specifies whether or not to pass in a system-level authentication token
  on each request.
* Defaults to false.
driver=<path>
* For scripttype=persist, specifies the command to start a persistent
  server for this process.
* Endpoints that share the same driver configuration can share processes.
* Environment variables are substituted.
* Defaults to using the persistconn/appserver.py server.
driver.arg.<n> = <string>
* For scripttype=persist, specifies the command to start a persistent
  server for this process.
* Environment variables are substituted.
* Only takes effect when "driver" is specifically set.
driver.env.<name>=<value>
* For scripttype=persist, specifies an environment variable to set when
  running the driver process.
passConf=<bool>
* If set, the script is sent the contents of this configuration stanza
  as part of the request.
* Only has effect for scripttype=persist.
* Defaults to true.
passSession=<bool>
* If set to true, sends the driver information about the user's
  session. This includes the user's name, an active authtoken,
  and other details.
* Only has effect for scripttype=persist.
* Defaults to true.
passHttpHeaders=<bool>
* If set to true, sends the driver the HTTP headers of the request.
* Only has effect for scripttype=persist.
* Defaults to false.
passHttpCookies=<bool>
* If set to true, sends the driver the HTTP cookies of the request.
* Only has effect for scripttype=persist.
* Defaults to false.
#############################
# 'admin'
# The built-in handler for the Extensible Administration Interface.
# Exposes the listed EAI handlers at the given URL.
#
[admin:<uniqueName>]
match=<partial URL>
* URL which, when accessed, will display the handlers listed below.
members=<csv list>
* List of handlers to expose at this URL.
* See https://localhost:8089/services/admin for a list of all possible
  handlers.
#############################
# 'admin_external'
# Register Python handlers for the Extensible Administration Interface.
# Handler will be exposed via its "uniqueName".
#
[admin_external:<uniqueName>]
handlertype=<script type>
* Currently only the value 'python' is valid.
handlerfile=<unique filename>
* Script to execute.
* For bin/myAwesomeAppHandler.py, specify only myAwesomeAppHandler.py.
handlerpersistentmode=[true|false]
* Set to true to run the script in persistent mode and keep the process
  running between requests.
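A minimal sketch using the settings above (the unique name and handler file
are hypothetical):

[admin_external:myAwesomeApp]
handlertype = python
handlerfile = myAwesomeAppHandler.py
handlerpersistentmode = true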
#########################
# Validation stanzas
# Add stanzas using the following definition to add arg validation to
# the appropriate EAI handlers.
[validation:<handler-name>]
<field> = <validation-rule>
* <field> is the name of the field whose value would be validated when an
  object is being saved.
* <validation-rule> is an eval expression using the validate() function to
  evaluate arg correctness and return an error message. If you use a
  boolean returning function, a generic message is displayed.
* <handler-name> is the name of the REST endpoint which this stanza applies
  to. handler-name is what is used to access the handler via
  /servicesNS/<user>/<app>/admin/<handler-name>.
* For example:
  action.email.sendresult = validate(isbool('action.email.sendresults'),
  "'action.email.sendresults' must be a boolean value").
* NOTE: use ' or $ to enclose field names that contain non-alphanumeric
  characters.
#############################
# 'eai'
# Settings to alter the behavior of EAI handlers in various ways.
# These should not need to be edited by users.
#
showInDirSvc = [true|false]
* Whether configurations managed by this handler should be enumerated via
  the directory service, used by SplunkWeb's "All Configurations"
  management page.
* Defaults to false.
#############################
# Miscellaneous
# The un-described parameters in these stanzas all operate according to
# the descriptions listed under "script:", above.
# These should not need to be edited by users - they are here only to
# quiet down the configuration checker.
#
[input:...]
dynamic = [true|false]
* If set to true, listen on the socket for data.
* If false, data is contained within the request body.
* Defaults to false.
[peerupload:...]
path = <directory path>
* Path to search through to find configuration bundles from search peers.
untar = [true|false]
* Whether or not a file should be untarred once the transfer is complete.
[restreplayshc]
methods = <comma separated strings>
* REST methods which will be replayed. POST, PUT, DELETE, HEAD, GET are
  the available options.
[proxy:appsbrowser]
destination = <splunkbaseAPIURL>
* protocol, subdomain, domain, port, and path of the splunkbase api used
to browse apps
* Defaults to https://splunkbase.splunk.com/api
restmap.conf.example
# Version 7.2.1
#
# This file contains example REST endpoint configurations.
#
# To use one or more of these configurations, copy the configuration
block into
# restmap.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# The following are default REST configurations. To create your own
# endpoints, modify the values by following the spec outlined in
# restmap.conf.spec.
#/////////////////////////////////////////////////////////////////////////////
# global settings
#/////////////////////////////////////////////////////////////////////////////
[global]
#/////////////////////////////////////////////////////////////////////////////
# internal C++ handlers
# NOTE: These are internal Splunk-created endpoints. 3rd party developers
# can only use script or search as handlers.
# (Please see restmap.conf.spec for help with configurations.)
#/////////////////////////////////////////////////////////////////////////////
[SBA:sba]
match=/properties
capability=get_property_map
[asyncsearch:asyncsearch]
match=/search
capability=search
[indexing-preview:indexing-preview]
match=/indexing/preview
capability=(edit_monitor or edit_sourcetypes) and (edit_user and edit_tcp)
savedsearches.conf
The following are the spec and example files for savedsearches.conf.
savedsearches.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for saved search
entries in
# savedsearches.conf. You can configure saved searches by creating
your own
# savedsearches.conf.
#
# There is a default savedsearches.conf in
$SPLUNK_HOME/etc/system/default. To
# set custom configurations, place a savedsearches.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# savedsearches.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
The possible attribute/value pairs for savedsearches.conf are:
[<stanza name>]
* Create a unique stanza name for each saved search.
* Follow the stanza name with any number of the following
attribute/value
pairs.
* If you do not specify an attribute, Splunk uses the default.
disabled = [0|1]
* Disable your search by setting to 1.
* A disabled search cannot run until it is enabled.
* This setting is typically used to keep a scheduled search from
  running on its schedule without deleting the search definition.
* Defaults to 0.
search = <string>
* Actual search terms of the saved search.
* For example, search = index::sampledata http NOT 500.
* Your search can include macro searches for substitution.
* To learn more about creating a macro search, search the documentation
  for "macro search."
* Multi-line search strings currently have some limitations. For
  example, use with the search command '|savedsearch' does not
  currently work with multi-line search strings.
* Defaults to empty string.
dispatchAs = [user|owner]
* When the saved search is dispatched via the
  "saved/searches/{name}/dispatch" endpoint, this setting controls what
  user that search is dispatched as.
* This setting is only meaningful for shared saved searches.
* When dispatched as user, the search is executed as if the requesting
  user owned the search.
* When dispatched as owner, the search is executed as if the owner of
  the search dispatched it, no matter what user requested it.
* If the 'force_saved_search_dispatch_as_user' attribute in the
  limits.conf file is set to true, the dispatchAs attribute is reset to
  'user' while the saved search is dispatching.
* Defaults to owner.
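For example, a shared saved search that should always run with the privileges of the requesting user rather than its owner might be configured like this (the stanza name and search string are hypothetical):

[Shared Error Report]
search = index=_internal log_level=ERROR | stats count by component
dispatchAs = user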
Scheduling options
enableSched = [0|1]
* Set this to 1 to run your search on a schedule.
* Defaults to 0.
allow_skew = <percentage>|<duration-specifier>
* Allows the search scheduler to randomly distribute scheduled searches
  more evenly over their periods.
* When set to non-zero for searches with the following cron_schedule
  values, the search scheduler randomly "skews" the second, minute, and
  hour that the search actually runs on:
* * * * * Every minute.
*/M * * * * Every M minutes (M > 0).
0 * * * * Every hour.
0 */H * * * Every H hours (H > 0).
0 0 * * * Every day (at midnight).
* When set to non-zero for a search that has any other cron_schedule
  setting, the search scheduler can only randomly "skew" the second
  that the search runs on.
* The amount of skew for a specific search remains constant between
  edits of the search.
* An integer value followed by '%' (percent) specifies the maximum
  amount of time to skew as a percentage of the scheduled search period.
* Otherwise, use <int><unit> to specify a maximum duration. Relevant
  units are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours,
  d, day, days. (The <unit> may be omitted only when <int> is 0.)
* Examples:
100% (for an every-5-minute search) = 5 minutes maximum
50% (for an every-minute search) = 30 seconds maximum
5m = 5 minutes maximum
1h = 1 hour maximum
* A value of 0 disallows skew.
* Default is 0.
realtime_schedule = [0|1]
* Controls the way the scheduler computes the next execution time of a
scheduled search.
* If this value is set to 1, the scheduler bases its determination of
the next
scheduled search execution time on the current time.
* If this value is set to 0, the scheduler bases its determination of
the next
scheduled search on the last search execution time. This is called
continuous
scheduling.
* If set to 1, the scheduler might skip some execution periods to make
  sure that the scheduler is executing the searches running over the
  most recent time range.
* If set to 0, the scheduler never skips scheduled execution periods.
  However, the execution of the saved search might fall behind
  depending on the scheduler's load. Use continuous scheduling whenever
  you enable the summary index option.
* The scheduler tries to execute searches that have realtime_schedule
  set to 1 before it executes searches that have continuous scheduling
  (realtime_schedule = 0).
* Defaults to 1.
schedule_window = <unsigned int> | auto
* When schedule_window is non-zero, it indicates to the scheduler that
  the search does not require a precise start time. This gives the
  scheduler greater flexibility when it prioritizes searches.
* When schedule_window is set to an integer greater than 0, it
  specifies the "window" of time (in minutes) a search may start within.
  + The schedule_window must be shorter than the period of the search.
  + Schedule windows are not recommended for searches that run every
    minute.
* When set to 0, there is no schedule window. The scheduler starts the
  search as close to its scheduled time as possible.
* When set to "auto," the scheduler calculates the schedule_window
  value automatically.
  + For more information about this calculation, see the search
    scheduler documentation.
* Defaults to 0 for searches that are owned by users with the
  edit_search_schedule_window capability. For such searches, this value
  can be changed.
* Defaults to "auto" for searches that are owned by users that do not
  have the edit_search_schedule_window capability. For such searches,
  this setting cannot be changed.
* A non-zero schedule_window is mutually exclusive with a non-default
  schedule_priority (see schedule_priority for details).
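As a sketch, a saved search scheduled every 30 minutes that the scheduler may start up to 10 minutes late would combine these settings as follows (values are hypothetical; note the window is shorter than the period):

enableSched = 1
cron_schedule = */30 * * * *
schedule_window = 10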
Notification options
quantity = <integer>
* Specifies a value for the counttype and relation, to determine the
  condition under which an alert is triggered by a saved search.
* You can think of it as a sentence constructed like this:
  <counttype> <relation> <quantity>.
* For example, "number of events [is] greater than 10" sends an alert
  when the count of events is larger than 10.
* For example, "number of events drops by 10%" sends an alert when the
  count of events drops by 10%.
* Defaults to an empty string.
#*******
# generic action settings.
# For a comprehensive list of actions and their arguments, refer to
# alert_actions.conf.
#*******
action.<action_name> = 0 | 1
* Indicates whether the action is enabled or disabled for a particular
  saved search.
* The action_name can be: email | populate_lookup | script | summary_index
* For more about your defined alert actions see alert_actions.conf.
* Defaults to an empty string.
action.<action_name>.<parameter> = <value>
* Overrides an action's parameter (defined in alert_actions.conf) with
  a new <value> for this saved search only.
* Defaults to an empty string.
action.email = 0 | 1
* Enables or disables the email action.
* Defaults to 0.
action.email.subject = <string>
* Set the subject of the email delivered to recipients.
* Defaults to SplunkAlert-<savedsearchname> (or whatever is set
in alert_actions.conf).
action.email.mailserver = <string>
* Set the address of the MTA server to be used to send the emails.
* Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf).
action.email.maxresults = <integer>
* Set the maximum number of results to be emailed.
* Any alert-level results threshold greater than this number will be
  capped at this level.
* This value affects all methods of result inclusion by email alert:
  inline, CSV and PDF.
* Note that this setting is affected globally by "maxresults" in the
  [email] stanza of alert_actions.conf.
* Defaults to 10000.
action.email.include.results_link = [1|0]
* Specify whether to include a link to search results in the
alert notification email.
* Defaults to 1 (or whatever is set in alert_actions.conf).
action.email.include.search = [1|0]
* Specify whether to include the query whose results triggered the
email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.trigger = [1|0]
* Specify whether to include the alert trigger condition.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.trigger_time = [1|0]
* Specify whether to include the alert trigger time.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.view_link = [1|0]
* Specify whether to include saved search title and a link for editing
the saved search.
* Defaults to 1 (or whatever is set in alert_actions.conf).
action.email.inline = [1|0]
* Specify whether to include search results in the body of the
alert notification email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendcsv = [1|0]
* Specify whether to send results as a CSV file.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendpdf = [1|0]
* Specify whether to send results as a PDF file.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendresults = [1|0]
* Specify whether to include search results in the
alert notification email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
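Taken together, a sketch of a scheduled search that emails inline results (the stanza name, schedule, and recipient are hypothetical; action.email.to is an alert_actions.conf parameter overridden here via the action.<action_name>.<parameter> mechanism described above):

[Hourly Error Digest]
search = index=_internal log_level=ERROR
enableSched = 1
cron_schedule = 0 * * * *
action.email = 1
action.email.to = [email protected]
action.email.subject = Errors in the last hour
action.email.inline = 1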
action.script = 0 | 1
* Enables or disables the script action.
* 1 to enable, 0 to disable.
* Defaults to 0
action.lookup = 0 | 1
* Enables or disables the lookup action.
* 1 to enable, 0 to disable.
* Defaults to 0
action.lookup.append = 0 | 1
* Specify whether to append results to the lookup file defined for the
action.lookup.filename attribute.
* Defaults to 0.
action.summary_index = 0 | 1
* Enables or disables the summary index action.
* Defaults to 0.
action.summary_index._name = <index>
* Specifies the name of the summary index where the results of the
  scheduled search are saved.
* Defaults to summary.
action.summary_index.inline = <bool>
* Determines whether to execute the summary indexing action as part of
  the scheduled search.
* NOTE: This option is considered only if the summary index action is
  enabled and is always executed (in other words, if counttype =
  always).
* Defaults to true.
action.summary_index.<field> = <string>
* Specifies a field/value pair to add to every event that gets summary
  indexed by this search.
* You can define multiple field/value pairs for a single summary index
  search.
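For instance, a summary-indexing search might combine these settings as follows (the marker field and its value are hypothetical):

action.summary_index = 1
action.summary_index._name = summary
action.summary_index.report = hourly_thruput

Here every summarized event is tagged with report=hourly_thruput, so searches against the summary index can select this search's output.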
action.populate_lookup = 0 | 1
* Enables or disables the lookup population action.
* Defaults to 0.
action.populate_lookup.dest = <string>
* Can be one of the following two options:
  * A lookup name from transforms.conf. The lookup name cannot be
    associated with KV store.
  * A path to a lookup .csv file that Splunk should copy the search
    results to, relative to $SPLUNK_HOME.
    * NOTE: This path must point to a .csv file in either of the
      following directories:
      * etc/system/lookups/
      * etc/apps/<app-name>/lookups
    * NOTE: the destination directories of the above files must
      already exist.
* Defaults to empty string.
run_on_startup = true | false
* We recommend that you set run_on_startup to true for scheduled
  searches that populate lookup tables or generate artifacts used by
  dashboards.
* Defaults to false.
dispatch.ttl = <integer>[p]
* Indicates the time to live (in seconds) for the artifacts of the
  scheduled search, if no actions are triggered.
* If the integer is followed by the letter 'p', Splunk interprets the
  ttl as a multiple of the scheduled search's execution period (e.g. if
  the search is scheduled to run hourly and ttl is set to 2p, the ttl
  of the artifacts is set to 2 hours).
* If an action is triggered, Splunk changes the ttl to that action's
  ttl. If multiple actions are triggered, Splunk applies the largest
  action ttl to the artifacts. To set the action's ttl, refer to
  alert_actions.conf.spec.
* For more info on search ttl, see the [search] stanza in
  limits.conf.spec.
* Defaults to 2p (that is, 2 x the period of the scheduled search).
dispatch.buckets = <integer>
* The maximum number of timeline buckets.
* Defaults to 0.
dispatch.max_count = <integer>
* The maximum number of results before finalizing the search.
* Defaults to 500000.
dispatch.max_time = <integer>
* Indicates the maximum amount of time (in seconds) before finalizing
  the search.
* Defaults to 0.
dispatch.lookups = 1 | 0
* Enables or disables lookups for this search.
* Defaults to 1.
dispatch.earliest_time = <time-str>
* Specifies the earliest time for this search. Can be a relative or
  absolute time.
* If this value is an absolute time, use the dispatch.time_format to
  format the value.
* Defaults to empty string.
dispatch.latest_time = <time-str>
* Specifies the latest time for this saved search. Can be a relative
  or absolute time.
* If this value is an absolute time, use the dispatch.time_format to
  format the value.
* Defaults to empty string.
dispatch.index_earliest = <time-str>
* Specifies the earliest index time for this search. Can be a relative
  or absolute time.
* If this value is an absolute time, use the dispatch.time_format to
  format the value.
* Defaults to empty string.
dispatch.index_latest = <time-str>
* Specifies the latest index time for this saved search. Can be a
  relative or absolute time.
* If this value is an absolute time, use the dispatch.time_format to
  format the value.
* Defaults to empty string.
dispatch.spawn_process = 1 | 0
* Specifies whether Splunk spawns a new search process when this saved
  search is executed.
* Default is 1.
dispatch.auto_cancel = <int>
* If specified, the job automatically cancels after this many seconds of
inactivity. (0 means never auto-cancel)
* Default is 0.
dispatch.auto_pause = <int>
* If specified, the search job pauses after this many seconds of
  inactivity. (0 means never auto-pause.)
* To restart a paused search job, specify unpause as an action to POST
  search/jobs/{search_id}/control.
* auto_pause only goes into effect once. Unpausing after auto_pause
  does not put auto_pause into effect again.
* Default is 0.
dispatch.reduce_freq = <int>
* Specifies how frequently Splunk should run the MapReduce reduce
  phase on accumulated map values.
* Defaults to 10.
dispatch.rt_backfill = <bool>
* Specifies whether to do real-time window backfilling for scheduled
  real-time searches.
* Defaults to false.
dispatch.indexedRealtime = <bool>
* Specifies whether to use indexed-realtime mode when doing realtime
searches.
* Overrides the setting in the limits.conf file for the
indexed_realtime_use_by_default
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more
information.
* Defaults to the value in the limits.conf file.
dispatch.indexedRealtimeOffset = <int>
* Controls the number of seconds to wait for disk flushes to finish.
* Overrides the setting in the limits.conf file for the
indexed_realtime_disk_sync_delay
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more
information.
* Defaults to the value in the limits.conf file.
dispatch.indexedRealtimeMinSpan = <int>
* Minimum seconds to wait between component index searches.
* Overrides the setting in the limits.conf file for the
  indexed_realtime_default_span attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more
information.
* Defaults to the value in the limits.conf file.
dispatch.rt_maximum_span = <int>
* The maximum number of seconds allowed to search data which falls
  behind realtime.
* Use this setting to set a limit, after which events are no longer
  considered for the result set. The search catches back up to the
  specified delay from realtime and uses the default span.
* Overrides the setting in the limits.conf file for the
  indexed_realtime_maximum_span attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more
information.
* Defaults to the value in the limits.conf file.
dispatch.sample_ratio = <int>
* The integer value used to calculate the sample ratio. The formula is 1
/ <int>.
* The sample ratio specifies the likelihood of any event being included
in the sample.
* For example, if sample_ratio = 500 each event has a 1/500 chance of
being included in the sample result set.
* Defaults to 1.
restart_on_searchpeer_add = 1 | 0
* Specifies whether to restart a real-time search managed by the
  scheduler when a search peer becomes available for this saved search.
* NOTE: The peer can be a newly added peer or a peer that has been
  down and has become available.
* Defaults to 1.
auto_summarize = <bool>
* Whether the scheduler should ensure that the data for this search is
  automatically summarized.
* Defaults to false.
auto_summarize.command = <string>
* A search template to be used to construct the auto summarization for
  this search.
* DO NOT change unless you know what you're doing.
auto_summarize.cron_schedule = <cron-string>
* Cron schedule to be used to probe/generate the summaries for this
  search.
auto_summarize.dispatch.<arg-name> = <string>
* Any dispatch.* options that need to be overridden when running the
  summary search.
auto_summarize.suspend_period = <time-specifier>
* Amount of time to suspend summarization of this search if the
  summarization is deemed unhelpful.
* Defaults to 24h.
auto_summarize.max_time = <unsigned int>
* Maximum amount of time that the summarization search is allowed to
  run. Note that this is an approximate time and the summarize search
  will be stopped at clean bucket boundaries.
* Defaults to: 3600
auto_summarize.hash = <string>
auto_summarize.normalized_hash = <string>
* These are auto-generated settings.
alert.suppress = 0 | 1
* Specifies whether alert suppression is enabled for this scheduled
search.
* Defaults to 0.
alert.suppress.period = <time-specifier>
* Sets the suppression period. Use [number][time-unit] to specify a
  time.
* For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1
  hour, etc.
* Honored if and only if alert.suppress = 1.
* Defaults to empty string.
alert.suppress.fields = <comma-delimited-field-list>
* List of fields to use when suppressing per-result alerts. This field
  *must* be specified if the digest mode is disabled and suppression
  is enabled.
* Defaults to empty string.
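As an illustrative sketch, throttling a per-result alert so that each host triggers at most once every 30 minutes (values are hypothetical):

alert.suppress = 1
alert.suppress.period = 30m
alert.suppress.fields = host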
alert.severity = <int>
* Sets the alert severity level.
* Valid values are: 1-debug, 2-info, 3-warn, 4-error, 5-severe, 6-fatal
* Defaults to 3.
alert.expires = <time-specifier>
* Sets the period of time to show the alert in the dashboard. Use
  [number][time-unit] to specify a time.
* For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1
  hour, etc.
* Defaults to 24h.
* This property is valid until splunkd restarts. Restart clears the
  listing of triggered alerts.
alert.display_view = <string>
* Name of the UI view where the emailed link for per-result alerts
  should point.
* If not specified, the value of request.ui_dispatch_app is used. If
  that is missing, "search" is used.
* Defaults to empty string.
alert.managedBy = <string>
* Specifies the feature/component that created the alert.
* Defaults to empty string.
UI-specific settings
displayview = <string>
* Defines the default UI view name (not label) in which to load the
results.
* Accessibility is subject to the user having sufficient permissions.
* Defaults to empty string.
vsid = <string>
* Defines the viewstate id associated with the UI view listed in
  'displayview'.
* Must match up to a stanza in viewstates.conf.
* Defaults to empty string.
description = <string>
* Human-readable description of this saved search.
* Defaults to empty string.
request.ui_dispatch_app = <string>
* Specifies a field used by Splunk UI to denote the app this search
  should be dispatched in.
* Defaults to empty string.
request.ui_dispatch_view = <string>
* Specifies a field used by Splunk UI to denote the view this search
  should be displayed in.
* Defaults to empty string.
# General options
display.general.enablePreview = 0 | 1
display.general.type = [events|statistics|visualizations]
display.general.timeRangePicker.show = 0 | 1
display.general.migratedFromViewState = 0 | 1
display.general.locale = <string>
# Event options
display.events.fields = [<string>(, <string>)*]
display.events.type = [raw|list|table]
display.events.rowNumbers = 0 | 1
display.events.maxLines = <int>
display.events.raw.drilldown = [inner|outer|full|none]
display.events.list.drilldown = [inner|outer|full|none]
display.events.list.wrap = 0 | 1
display.events.table.drilldown = 0 | 1
display.events.table.wrap = 0 | 1
# Statistics options
display.statistics.rowNumbers = 0 | 1
display.statistics.wrap = 0 | 1
display.statistics.overlay = [none|heatmap|highlow]
display.statistics.drilldown = [row|cell|none]
display.statistics.totalsRow = 0 | 1
display.statistics.percentagesRow = 0 | 1
display.statistics.show = 0 | 1
# Visualization options
display.visualizations.trellis.enabled = 0 | 1
display.visualizations.trellis.scales.shared = 0 | 1
display.visualizations.trellis.size = [small|medium|large]
display.visualizations.trellis.splitBy = <string>
display.visualizations.show = 0 | 1
display.visualizations.type = [charting|singlevalue|mapping|custom]
display.visualizations.chartHeight = <int>
display.visualizations.charting.chart = [line|area|column|bar|pie|scatter|bubble|radialGauge|fillerGauge|markerGauge]
display.visualizations.charting.chart.stackMode = [default|stacked|stacked100]
display.visualizations.charting.chart.nullValueMode = [gaps|zero|connect]
display.visualizations.charting.chart.overlayFields = <string>
display.visualizations.charting.drilldown = [all|none]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.layout.splitSeries = 0 | 1
display.visualizations.charting.layout.splitSeries.allowIndependentYRanges = 0 | 1
display.visualizations.charting.legend.mode = [standard|seriesCompare]
display.visualizations.charting.legend.placement = [right|bottom|top|left|none]
display.visualizations.charting.legend.labelStyle.overflowMode = [ellipsisEnd|ellipsisMiddle|ellipsisStart]
display.visualizations.charting.axisTitleX.text = <string>
display.visualizations.charting.axisTitleY.text = <string>
display.visualizations.charting.axisTitleY2.text = <string>
display.visualizations.charting.axisTitleX.visibility = [visible|collapsed]
display.visualizations.charting.axisTitleY.visibility = [visible|collapsed]
display.visualizations.charting.axisTitleY2.visibility = [visible|collapsed]
display.visualizations.charting.axisX.scale = linear|log
display.visualizations.charting.axisY.scale = linear|log
display.visualizations.charting.axisY2.scale = linear|log|inherit
display.visualizations.charting.axisX.abbreviation = none|auto
display.visualizations.charting.axisY.abbreviation = none|auto
display.visualizations.charting.axisY2.abbreviation = none|auto
display.visualizations.charting.axisLabelsX.majorLabelStyle.overflowMode = [ellipsisMiddle|ellipsisNone]
display.visualizations.charting.axisLabelsX.majorLabelStyle.rotation = [-90|-45|0|45|90]
display.visualizations.charting.axisLabelsX.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY2.majorUnit = <float> | auto
display.visualizations.charting.axisX.minimumNumber = <float> | auto
display.visualizations.charting.axisY.minimumNumber = <float> | auto
display.visualizations.charting.axisY2.minimumNumber = <float> | auto
display.visualizations.charting.axisX.maximumNumber = <float> | auto
display.visualizations.charting.axisY.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.enabled = 0 | 1
display.visualizations.charting.chart.sliceCollapsingThreshold = <float>
display.visualizations.charting.chart.showDataLabels = [all|none|minmax]
display.visualizations.charting.gaugeColors = [<hex>(, <hex>)*]
display.visualizations.charting.chart.rangeValues = [<string>(, <string>)*]
display.visualizations.charting.chart.bubbleMaximumSize = <int>
display.visualizations.charting.chart.bubbleMinimumSize = <int>
display.visualizations.charting.chart.bubbleSizeBy = [area|diameter]
display.visualizations.charting.fieldDashStyles = <string>
display.visualizations.charting.lineWidth = <float>
display.visualizations.custom.drilldown = [all|none]
display.visualizations.custom.height = <int>
display.visualizations.custom.type = <string>
display.visualizations.singlevalueHeight = <int>
display.visualizations.singlevalue.beforeLabel = <string>
display.visualizations.singlevalue.afterLabel = <string>
display.visualizations.singlevalue.underLabel = <string>
display.visualizations.singlevalue.unit = <string>
display.visualizations.singlevalue.unitPosition = [before|after]
display.visualizations.singlevalue.drilldown = [all|none]
display.visualizations.singlevalue.colorMode = [block|none]
display.visualizations.singlevalue.rangeValues = [<string>(, <string>)*]
display.visualizations.singlevalue.rangeColors = [<string>(, <string>)*]
display.visualizations.singlevalue.trendInterval = <string>
display.visualizations.singlevalue.trendColorInterpretation = [standard|inverse]
display.visualizations.singlevalue.showTrendIndicator = 0 | 1
display.visualizations.singlevalue.showSparkline = 0 | 1
display.visualizations.singlevalue.trendDisplayMode = [percent|absolute]
display.visualizations.singlevalue.colorBy = [value|trend]
display.visualizations.singlevalue.useColors = 0 | 1
display.visualizations.singlevalue.numberPrecision = [0|0.0|0.00|0.000|0.0000]
display.visualizations.singlevalue.useThousandSeparators = 0 | 1
display.visualizations.mapHeight = <int>
display.visualizations.mapping.type = [marker|choropleth]
display.visualizations.mapping.drilldown = [all|none]
display.visualizations.mapping.map.center = (<float>,<float>)
display.visualizations.mapping.map.zoom = <int>
display.visualizations.mapping.map.scrollZoom = 0 | 1
display.visualizations.mapping.map.panning = 0 | 1
display.visualizations.mapping.choroplethLayer.colorMode = [auto|sequential|divergent|categorical]
display.visualizations.mapping.choroplethLayer.maximumColor = <string>
display.visualizations.mapping.choroplethLayer.minimumColor = <string>
display.visualizations.mapping.choroplethLayer.colorBins = <int>
display.visualizations.mapping.choroplethLayer.neutralPoint = <float>
display.visualizations.mapping.choroplethLayer.shapeOpacity = <float>
display.visualizations.mapping.choroplethLayer.showBorder = 0 | 1
display.visualizations.mapping.markerLayer.markerOpacity = <float>
display.visualizations.mapping.markerLayer.markerMinSize = <int>
display.visualizations.mapping.markerLayer.markerMaxSize = <int>
display.visualizations.mapping.legend.placement = [bottomright|none]
display.visualizations.mapping.data.maxClusters = <int>
display.visualizations.mapping.showTiles = 0 | 1
display.visualizations.mapping.tileLayer.tileOpacity = <float>
display.visualizations.mapping.tileLayer.url = <string>
display.visualizations.mapping.tileLayer.minZoom = <int>
display.visualizations.mapping.tileLayer.maxZoom = <int>
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
* This setting has no effect on saved search execution when dispatched
  by the scheduler. It only comes into effect when the search is
  opened in the UI and run manually.
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.search.tab = [events|statistics|visualizations|patterns]
# Deprecated
display.page.pivot.dataModel = <string>
# Format options
display.statistics.format.<index> = [color|number]
display.statistics.format.<index>.field = <string>
display.statistics.format.<index>.fields = [<string>(, <string>)*]
# Color format options
display.statistics.format.<index>.scale = [category|linear|log|minMidMax|sharedCategory|threshold]
display.statistics.format.<index>.colorPalette = [expression|list|map|minMidMax|sharedList]
Other settings
embed.enabled = 0 | 1
* Specifies whether a saved search is shared for access with a
  guestpass.
* Search artifacts of a search can be viewed via a guestpass only if:
  * A token has been generated that is associated with this saved
    search. The token is associated with a particular user and app
    context.
  * The user to whom the token belongs has permissions to view that
    search.
  * The saved search has been scheduled and there are artifacts
    available. Only artifacts are available via guestpass: we never
    dispatch a search.
  * The saved search is not disabled, it is scheduled, it is not
    real-time, and it is not an alert.
defer_scheduled_searchable_idxc = <bool>
* Specifies whether to defer a continuous saved search during a
searchable rolling restart or searchable rolling upgrade of an indexer
cluster.
* Note: When disabled, a continuous saved search might return partial
results.
* Defaults: true (enabled).
Deprecated settings
sendresults = <bool>
* use action.email.sendresult
action_rss = <bool>
* use action.rss
action_email = <string>
* use action.email and action.email.to
role = <string>
* see saved search permissions
userid = <string>
* see saved search permissions
query = <string>
* use search
nextrun = <int>
* not used anymore, the scheduler maintains this info internally
qualifiedSearch = <string>
* not used anymore, the Splunk software computes this value during
runtime
savedsearches.conf.example
# Version 7.2.1
#
# This file contains example saved searches and alerts.
#
# To use one or more of these configurations, copy the configuration
block into
# savedsearches.conf in $SPLUNK_HOME/etc/system/local/. You must
restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[KB indexed per hour last 24 hours]
search = index=_internal metrics group=per_index_thruput NOT debug NOT sourcetype=splunk_web_access | timechart fixedrange=t span=1h sum(kb) | rename sum(kb) as totalKB
dispatch.earliest_time = -1d
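A further hypothetical example, not part of the shipped defaults: a scheduled alert that emails whenever any errors appear in the last hour (recipient, schedule, and threshold are illustrative):

[Errors in the last hour]
search = index=_internal log_level=ERROR
enableSched = 1
cron_schedule = 0 * * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = [email protected]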
searchbnf.conf
The following are the spec and example files for searchbnf.conf.
searchbnf.conf.spec
# Version 7.2.1
#
#
# This file contains descriptions of stanzas and attribute/value pairs
# for configuring the search-assistant via searchbnf.conf.
#
# There is a searchbnf.conf in $SPLUNK_HOME/etc/system/default/. It
# should not be modified. If your application has its own custom python
# search commands, your application can include its own searchbnf.conf
# to describe the commands to the search-assistant.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<search-commandname>-command]
[geocode-command]
[geocode-option]
#******************************************************************************
# The possible attributes/value pairs for searchbnf.conf
#******************************************************************************
syntax = <string>
* Describes the syntax of the search command. See the head of
searchbnf.conf for details.
* Required
simplesyntax = <string>
description = <string>
* Detailed text description of search command. Description can
  continue on the next line if the line ends in "\"
* Required
shortdesc = <string>
* A short description of the search command. The full DESCRIPTION
may take up too much screen real-estate for the search assistant.
* Required
example<index> = <string>
comment<index> = <string>
* 'example' should list out a helpful example of using the search
command, and 'comment' should describe that example.
* 'example' and 'comment' can be appended with matching indexes to
allow multiple examples and corresponding comments.
* For example:
  example2 = geocode maxcount=4
  comment2 = run geocode on up to four values
  example3 = geocode maxcount=-1
  comment3 = run geocode on all values
usage = public|private|deprecated
* Determines if a command is public, private, or deprecated. The
  search assistant only operates on public commands.
* Required
tags = <string>
* List of tags that describe this search command. Used to find
  commands when the user enters a synonym (e.g. "graph" -> "chart")
#******************************************************************************
# Optional attributes primarily used internally at Splunk
#******************************************************************************
appears-in = <string>
category = <string>
maintainer = <string>
note = <string>
optout-in = <string>
supports-multivalue = <string>
searchbnf.conf.example
# Version 7.2.1
#
# The following are example stanzas for searchbnf.conf configurations.
#
##################
# selfjoin
##################
[selfjoin-command]
syntax = selfjoin (<selfjoin-options>)* <field-list>
shortdesc = Join results with itself.
description = Join results with itself. Must specify at least one field
to join on.
usage = public
example1 = selfjoin id
comment1 = Joins results with itself on 'id' field.
related = join
tags = join combine unite
[selfjoin-options]
syntax = overwrite=<bool> | max=<int> | keepsingle=<int>
description = The selfjoin joins each result with other results that\
  have the same value for the join fields. 'overwrite' controls if\
  fields from these 'other' results should overwrite fields of the\
  result used as the basis for the join (default=true). max indicates\
  the maximum number of 'other' results each main result can join with.\
  (default = 1, 0 means no limit). 'keepsingle' controls whether or not\
  results with a unique value for the join fields (and thus no other\
  results to join with) should be retained. (default = false)
segmenters.conf
The following are the spec and example files for segmenters.conf.
segmenters.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring
# segmentation of events in segmenters.conf.
#
# There is a default segmenters.conf in
$SPLUNK_HOME/etc/system/default. To set
# custom configurations, place a segmenters.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
segmenters.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<SegmenterName>]
LOOKAHEAD = <integer>
* Set how far into a given event (in characters) Splunk segments.
* LOOKAHEAD is applied after any FILTER rules.
* To disable segmentation, set to 0.
* Defaults to -1 (read the whole event).
MINOR_LEN = <integer>
* Specify how long a minor token can be.
* Longer minor tokens are discarded without prejudice.
* Defaults to -1.
MAJOR_LEN = <integer>
* Specify how long a major token can be.
* Longer major tokens are discarded without prejudice.
* Defaults to -1.
MINOR_COUNT = <integer>
* Specify how many minor segments to create per event.
* After the specified number of minor tokens have been created, later
  ones are discarded without prejudice.
* Defaults to -1.
MAJOR_COUNT = <integer>
* Specify how many major segments are created per event.
* After the specified number of major segments have been created,
  later ones are discarded without prejudice.
* Defaults to -1.
segmenters.conf.example
# Version 7.2.1
#
# The following are examples of segmentation configurations.
#
# To use one or more of these configurations, copy the configuration
block into
# segmenters.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Example of a segmenter that doesn't index the date as segments in
# syslog data:
[syslog]
FILTER = ^.*?\d\d:\d\d:\d\d\s+\S+\s+(.*)$
[limited-reach]
LOOKAHEAD = 256
[first-line]
FILTER = ^(.*?)(\n|$)
[no-segmentation]
LOOKAHEAD = 0
server.conf
The following are the spec and example files for server.conf.
server.conf.spec
# Version 7.2.1
############################################################################
# This file contains settings and values to configure server options
# in server.conf.
#
# There is a server.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place a copy of server.conf in
# $SPLUNK_HOME/etc/system/local/.
#
# For examples, see server.conf.example. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including how file
# precedence is determined) see the Administration Manual section about
# configuration files. Splunk documentation can be found at
# https://docs.splunk.com/Documentation.
GLOBAL SETTINGS
[general]
serverName = <ASCII string>
* The name that identifies this Splunk software instance for features
  such as distributed search.
* Cannot be an empty string.
* Can contain environment variables.
* After any environment variables are expanded, the server name
  (if not an IPv6 address) can only contain letters, numbers,
  underscores, dots, and dashes. The server name must start with a
  letter, number, or an underscore.
* Default: <hostname>-<user_running_splunk>
sessionTimeout = <nonnegative integer>[s|m|h|d]
* The amount of time before a user session times out, expressed as a
search-like time range.
* Examples include "24h" (24 hours), "3d" (3 days),
"7200s" (7200 seconds, or two hours)
* Default: "1" (1 hour)
allowRemoteLogin = always|never|requireSetPassword
* Controls remote management by restricting general login. Note that
  this does not apply to trusted SSO logins from a trustedIP.
* If set to "always", enables authentication so that all remote login
  attempts are allowed.
* If set to "never", only local logins to splunkd are allowed. Note
  that this still allows remote management through splunkweb, if
  splunkweb is on the same server.
* If set to "requireSetPassword", which is the default:
  * In the free license, remote login is disabled.
  * In the pro license, remote login is only disabled for the "admin"
    user if the default password of "admin" has not been changed.
* NOTE: As of version 7.1, Splunk software does not support the use of
  default passwords.
tar_format = gnutar|ustar
* Sets the default TAR format.
* Default: gnutar
access_logging_for_phonehome = <boolean>
* Enables/disables logging to the splunkd_access.log file for client
phonehomes.
* Default: true (logging enabled)
hangup_after_phonehome = <boolean>
* Controls whether or not the deployment server hangs up the
  connection after the phonehome is done.
* By default, persistent HTTP 1.1 connections are used with the server
  to handle phonehomes. This might show higher memory usage if you
  have a large number of clients.
* If you have more than the maximum concurrent tcp connection number
  of deployment clients, persistent connections do not help with the
  reuse of connections. In that case, setting this to false helps
  bring down memory usage.
* Default: false (persistent connections for phonehome)
pass4SymmKey = <password>
* Authenticates traffic between:
  * License master and its license slaves.
  * Members of a cluster; see Note 1 below.
  * Deployment server (DS) and its deployment clients (DCs); see
    Note 2 below.
* Note 1: Clustering might override the passphrase specified here, in
  the [clustering] stanza. A clustering searchhead connecting to
  multiple masters might further override in the
  [clustermaster:stanza1] stanza.
* Note 2: By default, DS-DCs passphrase authentication is disabled.
  To enable DS-DCs passphrase authentication, you must *also* add the
  following line to the [broker:broker] stanza in the restmap.conf
  file:
  requireAuthentication = true
* In all scenarios, *every* node involved must set the same passphrase
  in the same stanzas, for example in the [general] stanza and/or the
  [clustering] stanza. Otherwise, the respective communication:
  - licensing and deployment in the case of the [general] stanza
  - clustering in the case of the [clustering] stanza
  does not proceed.
* Unencrypted passwords must not begin with "$1$", as this is used by
  Splunk software to determine if the password is already encrypted.
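As a sketch of the DS-DC case described in Note 2, both sides would carry the same key in server.conf, and the deployment server would additionally enable authentication in restmap.conf (the passphrase is a placeholder):

# server.conf, on the deployment server and every deployment client
[general]
pass4SymmKey = <your shared passphrase>

# restmap.conf, on the deployment server
[broker:broker]
requireAuthentication = true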
listenOnIPv6 = no|yes|only
* By default, splunkd listens for incoming connections (both REST and
  TCP inputs) using IPv4 only.
* When you set this value to "yes", splunkd simultaneously listens for
  connections on both IPv4 and IPv6.
* To disable IPv4 entirely, set listenOnIPv6 to "only". This causes
  splunkd to exclusively accept connections over IPv6. You might need
  to change the mgmtHostPort setting in the web.conf file.
  Use '[::1]' instead of '127.0.0.1'.
* Any setting of SPLUNK_BINDIP in your environment or the
  splunk-launch.conf file overrides the listenOnIPv6 value.
  In this case splunkd listens on the exact address specified.
connectUsingIpVersion = auto|4-first|6-first|4-only|6-only
* When making outbound TCP connections for forwarding event data,
  making distributed search requests, etc., this setting controls
  whether the connections are made using IPv4 or IPv6.
* Connections to literal addresses are unaffected by this setting. For
  example, if a forwarder is configured to connect to "10.1.2.3" the
  connection is made over IPv4 regardless of this setting.
* "auto":
  * If listenOnIPv6 is set to "no", the Splunk server follows the
    "4-only" behavior.
  * If listenOnIPv6 is set to "yes", the Splunk server follows the
    "6-first" behavior.
  * If listenOnIPv6 is set to "only", the Splunk server follows the
    "6-only" behavior.
* "4-first": If a host is available over both IPv4 and IPv6, then the
  Splunk server connects over IPv4 first and falls back to IPv6 if the
  connection fails.
* "6-first": splunkd tries IPv6 first and falls back to IPv4 on
  failure.
* "4-only": splunkd only attempts to make connections over IPv4.
* "6-only": splunkd only attempts to connect to the IPv6 address.
* Default: auto. This means that the Splunk server selects a
  reasonable value based on the listenOnIPv6 setting.
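For example, a dual-stack instance that accepts connections over both protocols and prefers IPv6 for outbound connections might combine the two settings as follows (a sketch, not a shipped default):

[general]
listenOnIPv6 = yes
connectUsingIpVersion = 6-first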
useHTTPServerCompression = <boolean>
* Specifies whether the splunkd HTTP server should support gzip
  content encoding. For more info on how content encoding works, see
  Section 14.3 of Request for Comments: 2616 (RFC2616) on the World
  Wide Web Consortium (W3C) website.
* Default: true
defaultHTTPServerCompressionLevel = <integer>
* If the useHTTPServerCompression setting is enabled (which it is by
  default), this setting controls the compression level that the
  Splunk server attempts to use.
* This number must be between 1 and 9.
* Higher numbers produce smaller compressed results but require more
  CPU usage.
* Default: 6 (which is appropriate for most environments)
skipHTTPCompressionAcl = <network_acl>
* Lists a set of networks or addresses to skip data compression.
  These are addresses that are considered so close that network speed
  is never an issue, so any CPU time spent compressing a response is
  wasteful.
* Note that the server might still respond with compressed data if it
  already has a compressed version of the data available.
* These rules are separated by commas or spaces.
* Each rule can be in the following forms:
  1. A single IPv4 or IPv6 address, for example: "10.1.2.3",
     "fe80::4a3"
  2. A CIDR block of addresses, for example: "10/8", "fe80:1234/32"
  3. A DNS name, possibly with a '*' used as a wildcard, for example:
     "myhost.example.com", "*.splunk.com"
  4. A single '*' which matches anything
* Entries can also be prefixed with '!' to negate their meaning.
* Default: localhost addresses
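A sketch combining the rule forms above (the DNS name is hypothetical):

skipHTTPCompressionAcl = 127.0.0.1, ::1, 10/8, *.internal.example.com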
legacyCiphers = decryptOnly|disabled
* This setting controls how Splunk software handles support for legacy
  encryption ciphers.
* If set to "decryptOnly", Splunk software supports decryption of
  configurations that have been encrypted with legacy ciphers.
  It encrypts all new configurations with newer and stronger ciphers.
* If set to "disabled", Splunk software neither encrypts nor decrypts
  configurations that have been encrypted with legacy ciphers.
* Default: "decryptOnly".
site = <site-id>
* Specifies the site that this Splunk instance belongs to when
  multisite is enabled.
* Valid values for site-id include site0 to site63.
* The special value "site0" can be set only on search heads or on
  forwarders that are participating in indexer discovery.
  * For a search head, "site0" disables search affinity.
  * For a forwarder participating in indexer discovery, "site0" causes
    the forwarder to send data to all peer nodes across all sites.
useHTTPClientCompression = true|false|on-http|on-https
* Specifies whether gzip compression should be supported when Splunkd
  acts as a client (including distributed searches). Note: For the
  content to be compressed, the HTTP server that the client is
  connecting to should also support compression.
* If the connection is being made over https and
  useClientSSLCompression=true, then setting
  useHTTPClientCompression=true results in double compression work
  without much compression gain. It is recommended that this value be
  set to "on-http" (or to "true", and useClientSSLCompression to
  "false").
* Default: false
embedSecret = <string>
* When using report embedding, normally the generated URLs can only be
  used on the search head that they were generated on.
* If "embedSecret" is set, then the token in the URL is encrypted with
  this key. Then other search heads with the exact same setting can
  also use the same URL.
* This is needed if you want to use report embedding across multiple
  nodes on a search head pool.
parallelIngestionPipelines = <integer>
* The number of discrete data ingestion pipeline sets to create for
  this instance.
* A pipeline set handles the processing of data, from receiving
  streams of events through event processing and writing the events to
  disk.
* An indexer that operates multiple pipeline sets can achieve improved
  performance with data parsing and disk writing, at the cost of
  additional CPU cores.
* For most installations, the default setting of "1" is optimal.
* Use caution when changing this setting. Increasing the CPU usage for
  data ingestion reduces available CPU cores for other tasks like
  searching.
* NOTE: Enabling multiple ingestion pipelines can change the behavior
  of some settings in other configuration files. Each ingestion
  pipeline enforces the limits of the following settings independently:
  1. maxKBps (in the limits.conf file)
  2. max_fd (in the limits.conf file)
  3. maxHotBuckets (in the indexes.conf file)
  4. maxHotSpanSecs (in the indexes.conf file)
* Default: 1
instanceType = <string>
* Should not be modified by users.
* Informs components (such as the SplunkWeb Manager section) which
  environment the Splunk server is running in, to allow for more
  customized behaviors.
* Default: "download", which means no special behaviors.
requireBootPassphrase = <boolean>
* Prompt the user for a boot passphrase when starting splunkd.
* Splunkd uses this passphrase to grant itself access to
platform-provided
secret storage facilities, like the GNOME keyring.
* For more information about secret storage, see the [secrets] stanza in
$SPLUNK_HOME/etc/system/README/authentication.conf.spec.
* Default: true, if Common Criteria mode is enabled. False if
Common Criteria mode is disabled.
remoteStorageRecreateIndexesInStandalone = <boolean>
* Controls re-creation of remote storage enabled indexes in standalone
mode.
* Default: true
cleanRemoteStorageByDefault = <boolean>
* Allows 'splunk clean eventdata' to clean the remote indexes when set
to true.
* Default: false
recreate_index_fetch_bucket_batch_size = <positive_integer>
* Controls the maximum number of bucket IDs to fetch from remote storage
as part of a single transaction for a remote storage enabled index.
* Only valid for standalone mode.
* Default: 500
recreate_bucket_fetch_manifest_batch_size = <positive_integer>
* Controls the maximum number of bucket manifests to fetch in parallel
from remote storage.
* Only valid for standalone mode.
* Default: 100
splunkd_stop_timeout = <positive_integer>
* The maximum time, in seconds, that splunkd waits for a graceful
shutdown to
complete before splunkd forces a stop.
* Default: 360 (6 minutes)
[deployment]
pass4SymmKey = <passphrase string>
* Authenticates traffic between the deployment server (DS) and its
  deployment clients (DCs).
* By default, DS-DCs passphrase authentication is disabled. To enable
  DS-DCs passphrase authentication, you must *also* add the following
  line to the [broker:broker] stanza in the restmap.conf file:
  requireAuthentication = true
* If the key is not set in the [deployment] stanza, the key is looked
  for in the [general] stanza.
* NOTE: Unencrypted passwords must not begin with "$1$", because this
  is used by Splunk software to determine if the password is already
  encrypted.
SSL Configuration details
[sslConfig]
* Set SSL for communications on Splunk back-end under this stanza name.
  * NOTE: To set SSL (for example HTTPS) for Splunk Web and the
    browser, use the web.conf file.
* Follow this stanza name with any number of the following
  attribute/value pairs.
* If you do not specify an entry for each attribute, the default value
  is used.
enableSplunkdSSL = <boolean>
* Enables/disables SSL on the splunkd management port (8089) and KV
  store port (8191).
* NOTE: Running splunkd without SSL is not generally recommended.
* Distributed search often performs better with SSL enabled.
* Default: true
useClientSSLCompression = <boolean>
* Turns on HTTP client compression.
* Server-side compression is turned on by default. Setting this on the
client-side enables compression between server and client.
* Enabling this potentially gives you much faster distributed searches
across multiple Splunk instances.
* Default: true
useSplunkdClientSSLCompression = <boolean>
* Controls whether SSL compression is used when splunkd is acting as
  an HTTP client, usually during certificate exchange, bundle
  replication, remote calls, etc.
* NOTE: This setting is effective if, and only if,
  useClientSSLCompression is set to "true".
* NOTE: splunkd is not involved in data transfer in distributed
  search; the search process, which runs separately, is.
* Default: true
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support for incoming
  connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version
  "tls" selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list
  but does nothing.
* When configured in FIPS mode, "ssl3" is always disabled regardless
  of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
  the $SPLUNK_HOME/etc/system/default/server.conf file for the
  current default.
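Two illustrative values built from the syntax above: the first accepts only TLS 1.2, the second accepts all TLS versions except TLS 1.0:

sslVersions = tls1.2
sslVersions = tls, -tls1.0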
sslVersionsForClient = <versions_list>
* Comma-separated list of SSL versions to support for outgoing HTTP
  connections from splunkd. This includes distributed search,
  deployment client, etc.
* This is usually less critical, since SSL/TLS always picks the
  highest version both sides support. However, you can use this
  setting to prohibit making connections to remote servers that only
  support older protocols.
* The syntax is the same as the 'sslVersions' setting above.
* NOTE: For forwarder connections, there is a separate 'sslVersions'
  setting in the outputs.conf file. For connections to SAML servers,
  there is a separate 'sslVersions' setting in the
  authentication.conf file.
* Default: The default can vary. See the 'sslVersionsForClient'
  setting in the $SPLUNK_HOME/etc/system/default/server.conf file for
  the current default.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is disabled. The exact set of SSL versions
allowed is configurable using the 'sslVersions' setting above.
sslVerifyServerCert = <boolean>
* This setting is used by distributed search and distributed
deployment clients.
* For distributed search: Used when making a search request
to another server in the search cluster.
* For distributed deployment clients: Used when polling a
deployment server.
* If set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* Default: false
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* If this value is set, and 'sslVerifyServerCert' is set to true,
  splunkd limits most outbound HTTPS connections to hosts that use a
  certificate with one of the listed common names.
* This feature does not work with the deployment server and client
  communication over SSL.
* Optional.
* Default: No common name checking.
requireClientCert = <boolean>
* Requires that any HTTPS client that connects to a splunkd internal
  HTTPS server has a certificate that was signed by a CA (Certificate
  Authority) specified by the 'sslRootCAPath' setting.
* Used by distributed search: Splunk indexing instances must be
  authenticated to connect to another splunk indexing instance.
* Used by distributed deployment: The deployment server requires that
  deployment clients are authenticated before allowing them to poll
  for new configurations/applications.
* If set to "true", a client can connect ONLY if a certificate created
  by our certificate authority was used on that client.
* Default: false
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH)
  curve to use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
  SHORT name.
* The list of valid named curves by their short and long names can be
  obtained by running this CLI command:
  $SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
serverCert = <path>
* The full path to the PEM (Privacy-Enhanced Mail) format server
certificate file.
* Certificates are auto-generated by splunkd upon starting Splunk.
* You can replace the default certificate with your own PEM
format file.
* Default: $SPLUNK_HOME/etc/auth/server.pem
sslKeysfile = <filename>
* DEPRECATED. Use the 'serverCert' setting instead.
* This file is in the directory specified by the 'caPath' setting
(see below).
* Default: server.pem
sslPassword = <password>
* Server certificate password.
* Default: "password"
sslKeysfilePassword = <password>
* DEPRECATED. Use the 'sslPassword' setting instead.
sslRootCAPath = <path>
* Full path to the root CA (Certificate Authority) certificate store
on the operating system.
* The <path> must refer to a PEM (Privacy-Enhanced Mail) format
file containing one or more root CA certificates concatenated
together.
* Required for Common Criteria.
* This setting is valid on Windows machines only if you have not set
'sslRootCAPathHonoredOnWindows' to "false".
* No default.
sslRootCAPathHonoredOnWindows = <boolean>
* DEPRECATED.
* Whether or not the Splunk instance respects the 'sslRootCAPath' setting
on Windows machines.
* If you set this setting to "false", then the instance does not respect
the 'sslRootCAPath' setting on Windows machines.
* This setting is valid only on Windows, and only if you have set
'sslRootCAPath'.
* When the 'sslRootCAPath' setting is respected, the instance expects to
find a valid PEM file with valid root certificates that are referenced
by that path. If a valid file is not present, SSL communication fails.
* Default: true.
caCertFile = <filename>
* DEPRECATED. Use the 'sslRootCAPath' setting instead.
* Used only if 'sslRootCAPath' is not set.
* File name (relative to 'caPath') of the CA (Certificate Authority)
certificate PEM format file containing one or more certificates
concatenated together.
* Default: cacert.pem
dhFile = <path>
* PEM (Privacy-Enhanced Mail) format Diffie-Hellman (DH) parameter file
name.
* The DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* No default.
caPath = <path>
* DEPRECATED. Use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not
absolute paths, then they are relative to this path.
* Default: $SPLUNK_HOME/etc/auth.
sendStrictTransportSecurityHeader = <boolean>
* If set to "true", the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This can help prevent a client from later being tricked by a
man-in-the-middle attack into accepting a non-SSL request. However,
this requires a commitment that no non-SSL web hosts ever run on this
hostname on any port. For example, if splunkweb is in its default
non-SSL mode, this can break the ability of a browser to connect to it.
* NOTE: Enable with caution.
* Default: false
allowSslCompression = <boolean>
* If set to "true", the server allows clients to negotiate
SSL-layer data compression.
* KV Store also observes this setting.
* If set to "false", KV Store disables TLS compression.
* Default: true
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the
connection settings from time to time.
* If set to "false", causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
sslClientSessionPath = <path>
* Path where all client sessions are stored for session re-use.
* Used if 'useSslClientSessionCache' is set to "true".
* No default.
useSslClientSessionCache = <boolean>
* Specifies whether to re-use client sessions.
* When set to "true", client sessions are stored in memory for
session re-use. This reduces handshake time, latency and
computation time to improve SSL performance.
* When set to "false", each SSL connection performs a full
SSL handshake.
* Default: false
sslServerSessionTimeout = <integer>
* Timeout, in seconds, for newly created sessions.
* If set to "0", disables the server-side session cache.
* The openssl default is 300 seconds.
* Default: 300 (5 minutes)
Splunkd http proxy configuration
[proxyConfig]
http_proxy = <string>
* If set, splunkd sends all HTTP requests through the proxy server
that you specify.
* No default.
https_proxy = <string>
* If set, splunkd sends all HTTPS requests through the proxy server
that you specify.
* If not set, splunkd uses the 'http_proxy' setting instead.
* No default.
no_proxy = <string>
* If set, splunkd uses the no_proxy rules to decide whether the proxy
server needs to be bypassed for matching hosts/IP Addresses.
Requests going to localhost/loopback address are not proxied.
* '*' (asterisk): Bypasses proxies for all requests. This is the only
wildcard, and it can be used only by itself.
* <IPv4 or IPv6 address>: Bypasses the proxy if the request is intended
for that IP address.
* <hostname>/<domain name>: Bypasses the proxy if the request is intended
for that host or domain name. For example:
* no_proxy = "wimpy" This matches the host name "wimpy"
* no_proxy = "splunk.com" This matches all host names in the splunk.com
domain (apps.splunk.com, www.splunk.com, and so on.)
* If any of the rules in the list has a '*', then that rule overrides all
other rules, and proxies are bypassed for all requests.
* Default: localhost, 127.0.0.1, ::1
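For example, a hypothetical [proxyConfig] stanza that routes splunkd
requests through a proxy while bypassing it for local and internal
addresses (the proxy host and domain are illustrative) might look like:

[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
no_proxy = localhost, 127.0.0.1, ::1, internal.example.com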
[httpServer]
* Set stand-alone HTTP settings for splunkd under this stanza name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, splunkd uses the
default value.
atomFeedStylesheet = <string>
* Defines the stylesheet relative URL to apply to default Atom feeds.
* Set to 'none' to stop writing out xsl-stylesheet directive.
* Default: /static/atom.xsl
follow-symlinks = <boolean>
* Specifies whether the static file handler (serving the '/static'
directory) follows filesystem symlinks when serving files.
* Default: false
disableDefaultPort = <boolean>
* If set to "true", turns off listening on the splunkd management port,
which is 8089 by default.
* NOTE: Changing this setting is not recommended.
* This is the general communication path to splunkd. If it is disabled,
there is no way to communicate with a running splunk.
* This means many command line splunk invocations cannot function,
splunkweb cannot function, the REST interface cannot function, and
so on.
* If you choose to disable the port anyway, understand that you are
selecting reduced Splunk functionality.
* Default: false
streamInWriteTimeout = <positive number>
* When uploading data to the HTTP server, if the HTTP server is unable
to write data to the receiver for the specified value, the operation
aborts.
* Default: 5
max_content_length = <integer>
* Maximum content length, in bytes.
* HTTP requests over the size specified are rejected.
* This setting exists to avoid allocating an unreasonable amount
of memory from web requests.
* In environments where indexers have enormous amounts of RAM, this
number can be reasonably increased to handle large quantities of
bundle data.
* Default: 2147483648 (2GB)
maxSockets = <integer>
* The number of simultaneous HTTP connections that Splunk Enterprise
accepts. You can limit this number to constrain resource usage.
* If set to "0", Splunk Enterprise automatically sets maxSockets to
one third of the maximum allowable open files on the host.
* If this number is less than 50, it is set to 50.
* If this number is greater than 400000, it is set to 400000.
* If set to a negative number, no limit is enforced.
* Default: 0
maxThreads = <integer>
* The number of threads that can be used by active HTTP transactions.
You can limit this number to constrain resource usage.
* If set to 0, Splunk Enterprise automatically sets the limit to
one third of the maximum allowable threads on the host.
* If this number is less than 20, it is set to 20. If this number is
greater than 150000, it is set to 150000.
* If maxSockets is not negative and maxThreads is greater than
maxSockets, then Splunk Enterprise sets maxThreads to be equal to
maxSockets.
* If set to a negative number, no limit is enforced.
* Default: 0
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunkd HTTP server allows a keep-alive
connection to remain idle before forcibly disconnecting it.
* If this number is less than 7200, it is set to 7200.
* Default: 7200 (2 hours)
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunkd HTTP server allows a keep-alive
connection to remain idle while in a busy state before forcibly
disconnecting it.
* Use caution when configuring this setting as a value that is too large
can result in file descriptor exhaustion due to idling connections.
* If this number is less than 12, it is set to 12.
* Default: 12
forceHttp10 = auto|never|always
* When set to "always", the REST HTTP server does not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto" it does this only if the client sent no
User-Agent header, or if the user agent is known to have bugs
in its HTTP/1.1 support.
* When set to "never" it always allows HTTP 1.1, even to
clients it suspects may be buggy.
* Default: "auto"
x_frame_options_sameorigin = <boolean>
* Adds a X-Frame-Options header set to "SAMEORIGIN" to every response
served by splunkd
* Default: true
allowEmbedTokenAuth = <boolean>
* If set to false, splunkd does not allow any access to artifacts
that previously had been explicitly shared to anonymous users.
* This effectively disables all use of the "embed" feature.
* Default: true
cliLoginBanner = <string>
* Sets a message which is added to the HTTP reply headers
of requests for authentication, and to the "server/info" endpoint
* This is printed by the Splunk CLI before it prompts
for authentication credentials. This can be used to print
access policy information.
* If this string starts with a '"' character, it is treated as a
CSV-style list with each line comprising a line of the message.
For example: "Line 1","Line 2","Line 3"
* No default.
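For example, a hypothetical two-line banner using the CSV-style list
syntax described above might look like:

cliLoginBanner = "Authorized users only.","All activity is monitored and logged."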
allowBasicAuth = <boolean>
* Allows clients to make authenticated requests to the splunk
server using "HTTP Basic" authentication in addition to the
normal "authtoken" system.
* This is useful for programmatic access to REST endpoints and
for accessing the REST API from a web browser. It is not
required for the UI or CLI.
* Default: true
basicAuthRealm = <string>
* When using "HTTP Basic" authenitcation, the 'realm' is a
human-readable string describing the server. Typically, a web
browser presents this string as part of its dialog box when
asking for the username and password.
* This can be used to display a short message describing the
server and/or its access policy.
* Default: "/splunk"
allowCookieAuth = <boolean>
* Allows clients to request an HTTP cookie from the /services/auth/login
endpoint, which can then be used to authenticate future requests.
* Default: true
cookieAuthHttpOnly = <boolean>
* When using cookie-based authentication, mark returned cookies
with the "httponly" flag to tell the client not to allow JavaScript
code to access its value.
* NOTE: has no effect if allowCookieAuth=false
* Default: true
cookieAuthSecure = <boolean>
* When using cookie-based authentication, mark returned cookies
with the "secure" flag to tell the client never to send them over
an unencrypted HTTP channel.
* NOTE: has no effect if allowCookieAuth=false OR the splunkd REST
interface has SSL disabled
* Default: true
dedicatedIoThreads = <integer>
* If set to zero, HTTP I/O is performed in the same thread
that accepted the TCP connection.
* If set to a non-zero value, separate threads are run
to handle the HTTP I/O, including SSL encryption.
* Typically this setting does not need to be changed. For most usage
scenarios, using the same thread offers the best performance.
* Default: 0
replyHeader.<name> = <string>
* Add a static header to all HTTP responses this server generates.
* For example, "replyHeader.My-Header = value" causes the
response header "My-Header: value" to be included in the reply to
every HTTP request to the REST server.
[httpServerListener:<ip:><port>]
* Enable the splunkd REST HTTP server to listen on an additional port
number specified by <port>. If a non-empty <ip> is included (for
example: "[httpServerListener:127.0.0.1:8090]"), the listening port is
bound only to a specific interface.
* Multiple "httpServerListener" stanzas can be specified to listen on
more ports.
* Normally, splunkd listens only on the single REST port specified in
the web.conf "mgmtHostPort" setting, and none of these stanzas need to
be present. Add these stanzas only if you want the REST HTTP server
to listen to more than one port.
ssl = <boolean>
* Toggle whether this listening ip:port uses SSL or not.
* If the main REST port is SSL (the "enableSplunkdSSL" setting in this
file's [sslConfig] stanza) and this stanza is set to "ssl=false" then
clients on the local machine such as the CLI may connect to this port.
* Default: true
listenOnIPv6 = no|yes|only
* Toggle whether this listening ip:port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept a
connection. The input applies rules in order, and uses the first one
that matches.
For example, "!10.1/16, *" allows connections from everywhere except
the 10.1.*.* network.
* Default: The setting in the [httpServer] stanza
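For example, a hypothetical stanza that opens an additional non-SSL REST
port bound only to the loopback interface (the port number is
illustrative) might look like:

[httpServerListener:127.0.0.1:8090]
ssl = false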
[mimetype-extension-map]
* Map filename extensions to MIME type for files served from the static
file handler under this stanza name.
<file-extension> = <MIME-type>
* Instructs the HTTP static file server to mark any files ending
in 'file-extension' with a header of 'Content-Type: <MIME-type>'.
* Default:
[mimetype-extension-map]
gif = image/gif
htm = text/html
jpg = image/jpg
png = image/png
txt = text/plain
xml = text/xml
xsl = text/xml
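For example, a hypothetical local override that adds a mapping for JSON
files on top of the defaults above might look like:

[mimetype-extension-map]
json = application/json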
[stderr_log_rotation]
* Controls the data retention of the file containing all messages written
to splunkd's stderr file descriptor (fd 2).
* Typically this is extremely small, or mostly errors and warnings from
linked libraries.
maxFileSize = <bytes>
* When splunkd_stderr.log grows larger than this value, it is rotated.
* maxFileSize is expressed in bytes.
* You might want to increase this if you are working on a problem
that involves large amounts of output to the splunkd_stderr.log file.
* You might want to reduce this to allocate less storage to this log
category.
* Default: 10000000 (10 si-megabytes)
checkFrequency = <seconds>
* How often, in seconds, to check the size of splunkd_stderr.log.
* Larger values may result in larger rolled file sizes but take less
resources.
* Smaller values may take more resources but more accurately constrain
the file size.
* Default: 10
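For example, a hypothetical configuration that lets splunkd_stderr.log
grow to roughly 50 MB before rotation and checks its size every 5
seconds (illustrative values) might look like:

[stderr_log_rotation]
maxFileSize = 50000000
checkFrequency = 5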
[stdout_log_rotation]
* Controls the data retention of the file containing all messages written
to splunkd's stdout file descriptor (fd 1).
* Almost always, there is nothing in this file.
maxFileSize = <bytes>
BackupIndex = <non-negative integer>
checkFrequency = <seconds>
[applicationsManagement]
* Set remote applications settings for Splunk under this stanza name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk uses the
default value.
allowInternetAccess = <boolean>
* Allow Splunk to access the remote applications repository.
url = <URL>
* Applications repository.
* Default: https://apps.splunk.com/api/apps
loginUrl = <URL>
* Applications repository login.
* Default: https://apps.splunk.com/api/account:login/
detailsUrl = <URL>
* Base URL for application information, keyed off of app ID.
* Default: https://apps.splunk.com/apps/id
useragent = <splunk-version>-<splunk-build-num>-<platform>
* User-agent string to use when contacting applications repository.
* <platform> includes information like operating system and CPU
architecture.
updateHost = <URL>
* Host section of URL to check for app updates, e.g.
https://apps.splunk.com
updatePath = <URL>
* Path section of URL to check for app updates.
For example: /api/apps:resolve/checkforupgrade
sslVersions = <versions_list>
* Comma-separated list of SSL versions to connect to 'url'
(https://apps.splunk.com).
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version
"tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
sslVerifyServerCert = <boolean>
* If this is set to true, Splunk verifies that the remote server
(specified in 'url') being connected to is a valid one (authenticated).
Both the common name and the alternate name of the server are then
checked for a match if they are specified in 'sslCommonNameToCheck' and
'sslAltNameToCheck'. A certificate is considered verified if either
is matched.
* Default: true
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format
file.
* The <path> must refer to a PEM format file containing one or more
root CA certificates concatenated together.
* Used only if 'sslRootCAPath' is not set.
* Used for validating SSL certificate from https://apps.splunk.com/
ecdhCurves = <comma-separated list of EC curves>
* ECDH curves to use for ECDH key negotiation.
* Splunk software only supports named curves that have been specified by
their SHORT names.
* The list of valid named curves by their short and long names can be
obtained by running this CLI command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Example: ecdhCurves = prime256v1,secp384r1,secp521r1
* Default: The default can vary. See the 'ecdhCurves' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
Misc. configuration
[scripts]
initialNumberOfScriptProcesses = <num>
* The number of pre-forked script processes that are launched when the
system comes up. These scripts are reused when script REST endpoints
*and* search scripts are executed.
The idea is to eliminate the performance overhead of launching the
script interpreter every time it is invoked. These processes are put
in a pool. If the pool is completely busy when a script gets invoked,
a new process is fired up to handle the new invocation - but it
disappears when that invocation is finished.
Disk usage settings (for the indexer, not for Splunk log files)
[diskUsage]
minFreeSpace = <num>|<percentage>
* Minimum free space for a partition.
* Specified as an integer that represents a size in binary megabytes
(MiB), or as a percentage, written as a decimal between 0 and 100
followed by a '%' sign, for example "10%" or "10.5%"
* If specified as a percentage, this is taken to be a percentage of
the size of the partition. Therefore, the absolute free space required
varies for each partition depending on the size of that partition.
* Specifies a safe amount of space that must exist for splunkd to
continue operating.
* Note that this affects search and indexing.
* For search:
* Before attempting to launch a search, Splunk software requires this
amount of free space on the filesystem where the dispatch directory
is stored, $SPLUNK_HOME/var/run/splunk/dispatch
* Applied similarly to the search quota values in authorize.conf and
limits.conf.
* For indexing:
* Periodically, the indexer checks space on all partitions
that contain splunk indexes, as specified by indexes.conf. Indexing
is paused, and a UI banner and splunkd warning are posted to indicate
the need to clear more disk space.
* Default: 5000 (approx 5GB)
pollingFrequency = <num>
* Specifies that after every 'pollingFrequency' events are indexed,
the disk usage is checked.
* Default: 100000
pollingTimerFrequency = <num>
* Minimum time, in seconds, between two disk usage checks.
* Default: 10
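For example, a hypothetical [diskUsage] stanza that pauses indexing and
blocks searches when a partition falls below 10% free space (an
illustrative threshold) might look like:

[diskUsage]
minFreeSpace = 10%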
Queue settings
[queue]
maxSize = [<integer>|<integer>[KB|MB|GB]]
* Specifies default capacity of a queue.
* If specified as a lone integer (for example, maxSize=1000), maxSize
indicates the maximum number of events allowed in the queue.
* If specified as an integer followed by KB, MB, or GB (for example,
maxSize=100MB), it indicates the maximum RAM allocated for the queue.
* Default: 500KB
cntr_1_lookback_time = [<integer>[s|m]]
* The lookback counters are used to track the size and count (number of
elements in the queue) variation of the queues using an exponentially
moving weighted average technique. Both size and count variation
have 3 sets of counters each. The set of 3 counters is provided to
track the short, medium, and long term history of size/count
variation. You can customize the value of these counters or the
lookback time.
* Specifies how far into history the size/count variation should be
tracked for counter 1.
* It must be an integer followed by [s|m], which stands for seconds and
minutes respectively.
* Default: 60s
cntr_2_lookback_time = [<integer>[s|m]]
* See above for explanation and usage of the lookback counter.
* Specifies how far into history the size/count variation should be
tracked for counter 2.
* Default: 600s (10 minutes)
cntr_3_lookback_time = [<integer>[s|m]]
* See above for explanation and usage of the lookback counter.
* Specifies how far into history the size/count variation should be
tracked for counter 3.
* Default: 900s (15 minutes).
sampling_interval = [<integer>[s|m]]
* The lookback counters described above collect the size and count
measurements for the queues. This specifies the interval at which the
measurement collection happens. Note that for a particular queue, the
sampling interval is the same for all counters.
* It must be specified as an integer followed by [s|m], which stands for
seconds and minutes respectively.
* Default: 1s
[queue=<queueName>]
maxSize = [<integer>|<integer>[KB|MB|GB]]
* Specifies the capacity of a queue. It overrides the default capacity
specified in the [queue] stanza.
* If specified as a lone integer (for example, maxSize=1000), maxSize
indicates the maximum number of events allowed in the queue.
* If specified as an integer followed by KB, MB, or GB (for example,
maxSize=100MB), it indicates the maximum RAM allocated for the queue.
* Default: The default is inherited from the 'maxSize' value specified
in the [queue] stanza.
cntr_1_lookback_time = [<integer>[s|m]]
* Same explanation as mentioned in the [queue] stanza.
* Specifies the lookback time for the specific queue for counter 1.
* Default: The default value is inherited from the 'cntr_1_lookback_time'
value that is specified in the [queue] stanza.
cntr_2_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 2.
* Default: The default value is inherited from the 'cntr_2_lookback_time'
value that is specified in the [queue] stanza.
cntr_3_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 3.
* Default: The default value is inherited from the 'cntr_3_lookback_time'
value that is specified in the [queue] stanza.
sampling_interval = [<integer>[s|m]]
* Specifies the sampling interval for the specific queue.
* Default: The default value is inherited from the 'sampling_interval'
value specified in the [queue] stanza.
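For example, a hypothetical configuration that raises the default queue
capacity and gives the parsingQueue a larger capacity of its own
(illustrative sizes) might look like:

[queue]
maxSize = 1MB

[queue=parsingQueue]
maxSize = 6MB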
[pubsubsvr-http]
disabled = <boolean>
* If set to "true", the HTTP endpoint is not registered. Set this value
to 'false' to expose the PubSub server on HTTP.
* Default: true
stateIntervalInSecs = <seconds>
* The number of seconds before a connection is flushed due to inactivity.
The connection is not closed; only messages for that connection are
flushed.
* Default: 300 (5 minutes)
# [fileInput]
# outputQueue = <queue name>
* REMOVED. Historically this allowed the user to set the target queue for
the file-input (tailing) processor, but there was no valid reason to
modify this.
* This setting is now removed, and has no effect.
* Tailing always uses the parsingQueue.
[diag]
# These settings provide defaults for the content gathered by the diag
# command. Generally these can be further modified by command line flags
# to the diag command.
* searchpeers : Directory listing of the distributed search
configuration bundles.
In other words: $SPLUNK_HOME/var/run/searchpeers
* consensus : Consensus protocol files produced by search head
clustering.
In other words: $SPLUNK_HOME/var/run/splunk/_raft
* conf_replication_summary : Directory listing of configuration
replication summaries produced by search head clustering.
In other words: $SPLUNK_HOME/var/run/splunk/snapshot
* rest : The contents of a variety of splunkd endpoints.
Includes server status messages (system banners),
licenser banners, configured monitor inputs, and tailing
file status (progress reading input files).
* On cluster masters, also gathers master info, fixups,
current peer list, clustered index info, current
generation, and buckets in bad stats.
* On cluster slaves, also gathers local buckets, local
slave info, and the master information remotely from
the configured master.
* kvstore : Directory listings of the KV Store data directory
contents are gathered, in order to see filenames,
directory names, sizes, and timestamps.
* file_validate : Produce a list of files that were in the install
media which have been changed. Generally this should be
an empty list.
# Most of the existing ones are designed to limit the size and collection
# time to pleasant values.
# NOTE: Most values here use underscores '_' while the command line uses
# hyphens '-'.
all_dumps = <boolean>
* This setting currently is irrelevant on UNIX platforms.
* Affects the 'log' component of diag. (dumps are written to the log dir
on Windows)
* Can be overridden with the --all-dumps command line flag.
* Normally, Splunk diag gathers only three .DMP (crash dump) files on
Windows to limit diag size.
* If this is set to true, splunk diag collects *all* .DMP files from
the log directory.
* No default. (false equivalent).
index_files = [full|manifests]
* Selects a detail level for the 'index_files' component.
* Can be overridden with the --index-files command line flag.
* If set to 'manifests', limits the index file-content collection to just
.bucketManifest files, which give some information about the general
state of buckets in an index.
* If set to 'full', adds the collection of Hosts.data, Sources.data, and
Sourcetypes.data, which indicate the breakdown of count of items by
those categories per-bucket, and the timespans of those category
entries.
* 'full' can take quite some time on very large index sizes, especially
when slower remote storage is involved.
* Default: manifests
index_listing = [full|light]
* Selects a detail level for the 'index_listing' component.
* Can be overridden with the --index-listing command line flag.
* 'light' gets directory listings (ls, or dir) of the hot/warm and cold
container directory locations of the indexes, as well as listings of
each hot bucket.
* 'full' gets a recursive directory listing of all the contents of every
index location, which should mean all contents of all buckets.
* 'full' may take significant time as well with very large bucket counts,
especially on slower storage.
* Default: light
etc_filesize_limit = <non-negative integer>
* Can be overridden with the --etc-filesize-limit command line flag.
* This value is specified in kilobytes.
* Example: 2000 - this would be approximately 2MB.
* Files in the $SPLUNK_HOME/etc directory which are larger than this
limit are not collected in the diag.
* Diag produces a message stating that a file has been skipped for size
to the console. (In practice we found these large files are often a
surprise to the administrator and indicate problems.)
* If desired, this filter may be entirely disabled by setting the value
to 0.
* Currently, as a special exception, the file
$SPLUNK_HOME/etc/system/replication/ops.json is permitted to be 10x
the size of this limit.
* Default: 10000 (10MB)
upload_proto_host_port = <protocol://host:port>|disabled
* URI base to use for uploading files/diags to Splunk support.
* If set to disabled (override in a local/server.conf file), effectively
disables diag upload functionality for this Splunk install.
* Modification may theoretically permit operations with some forms of
proxies, but diag is not specifically designed for such, and support
of proxy configurations that do not currently work is considered an
Enhancement Request.
* The communication path with api.splunk.com is over a simple but not
documented protocol. If for some reason you wish to accept diag
uploads into your own systems, it probably is simpler to run diag and
then upload via your own means independently. However, if you have
business reasons that you want this built-in, get in touch.
* Uploading over unencrypted HTTP is definitely not recommended.
* Default: https://api.splunk.com
SEARCHFILTERSIMPLE-<class> = regex
SEARCHFILTERLUHN-<class> = regex
* Redacts strings from ad-hoc searches logged in the audit.log and
remote_searches.log files.
* Substrings which match these regexes *inside* a search string in one of
those two files are replaced by sequences of the character X, as in
XXXXXXXX.
* Substrings which match a SEARCHFILTERLUHN regex have the contained
numbers further tested against the Luhn algorithm, used for data
integrity in mostly financial circles, such as credit card numbers.
This permits more accurate identification of that type of data,
relying less heavily on regex precision. See the Wikipedia article on
the "Luhn algorithm" for additional information.
* Search string filtering is entirely disabled if
--no-filter-searchstrings is used on the command line.
* NOTE: Matching regexes must take care to match only the bytes of the
term. Each match "consumes" a portion of the search string, so matches
that extend beyond the term (for example, to adjacent whitespace)
could prevent subsequent matches, and/or redact data needed for
troubleshooting.
* Please use a name hinting at the purpose of the filter in the <class>
component of the setting name, and consider an additional explicative
comment, even for custom local settings. This might skip inquiries
from support.
[applicense]
appLicenseHostPort = <IP:port>
* Specifies the location of the IP address or DNS name and port of the
app license server.
appLicenseServerPath = <path>
* Specifies the path portion of the URI of the app license server.
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format
file.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Default: $SPLUNK_HOME/etc/auth/cacert.pem
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The special version "*" selects all supported versions. The version
"tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
sslVerifyServerCert = <boolean>
* If this is set to true, Splunk verifies that the remote server
(specified in 'url') being connected to is a valid one (authenticated).
Both the common name and the alternate name of the server are then
checked for a match if they are specified in 'sslCommonNameToCheck' and
'sslAltNameToCheck'. A certificate is considered verified if either
is matched.
* Default: true
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
* If this value is set, and 'sslVerifyServerCert' is set to true, splunkd
also verifies certificates which have a "Subject Alternate Name" that
matches any of the alternate names in this list, and considers such
certificates valid.
* Items in this list are never validated against the SSL Common Name.
* Default: Some alternate name checking
disabled = <boolean>
* Select true to disable this feature or false to enable this feature.
App licensing is experimental, so it is disabled by default.
* Default: true
[license]
master_uri = [self|<uri>]
* An example of <uri>: <scheme>://<hostname>:<port>
active_group = Enterprise|Trial|Forwarder|Free
* These timeouts only matter if you have 'master_uri' set to a remote
master.
connection_timeout = 30
* Maximum time, in seconds, to wait before the connection to the master
times out.
send_timeout = <integer>
* Maximum time, in seconds, to wait before sending data to the master
times out.
* Default: 30
receive_timeout = <integer>
* Maximum time, in seconds, to wait before receiving data from the
master times out.
* Default: 30
squash_threshold = <positive integer>
* Advanced setting. Periodically, the indexer reports the data it has
indexed, broken down by source, sourcetype, host, and index, to the
license master.
* Default: 2000
strict_pool_quota = <boolean>
* Toggles strict pool quota enforcement.
* If set to true, members of pools receive warnings for a given day if
usage exceeds pool size, regardless of whether overall stack quota was
exceeded.
* If set to false, members of a pool only receive warnings if both pool
usage exceeds pool size AND overall stack usage exceeds stack size.
* Default: true
pool_suggestion = <string>
* Suggest a pool to the master for this slave.
* The master uses this suggestion if the master doesn't have an explicit
rule mapping the slave to a given pool (that is, no slave list for the
relevant license stack contains this slave explicitly).
* If the pool name doesn't match any existing pool, it is ignored; no
error is generated.
* This setting is intended to give an alternative management option for
pool/slave mappings. When onboarding an indexer, it may be easier to
manage the mapping on the indexer itself via this setting rather than
having to update server.conf on the master for every addition of a new
indexer.
* NOTE: If you have multiple stacks and a slave maps to multiple pools,
this feature is limited in only allowing a suggestion of a single
pool; this is not a common scenario, however.
* No default. (which means this feature is disabled)
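For example, a hypothetical license slave that points at a remote
license master and suggests a pool (the host name and pool name are
illustrative) might look like:

[license]
master_uri = https://license-master.example.com:8089
pool_suggestion = indexer_pool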
[lmpool:auto_generated_pool_forwarder]
* This is the auto generated pool for the forwarder stack
description = <textual description of this license pool>
quota = MAX|<maximum amount allowed by this license>
* MAX indicates the total capacity of the license. You may have only one
pool with MAX size in a stack.
* The quota can also be specified as a specific size, e.g. 20MB, 1GB,
and so on.
stack_id = forwarder
* The stack to which this pool belongs.
[lmpool:auto_generated_pool_free]
* This is the auto generated pool for the free stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
[lmpool:auto_generated_pool_enterprise]
* This is the auto generated pool for the enterprise stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
[lmpool:auto_generated_pool_download_trial]
* This is the auto generated pool for the download trial stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
############################################################################
#
# Search head pooling configuration
#
# Changes to a search head's pooling configuration must be made to the
# file:
#
# $SPLUNK_HOME/etc/system/local/server.conf
#
# In other words, you cannot deploy the [pooling] stanza using an app,
# either on local disk or on shared storage.
#
# This is because these values are read before the configuration system
# itself has been completely initialized. Take the value of the
# 'storage' setting, for example. This value cannot be placed in an app
# on shared storage because Splunk must use this value to find shared
# storage in the first place!
#
############################################################################
[pooling]
state = [enabled|disabled]
* Enables or disables search head pooling.
* Default: disabled
app_update_triggers = true|false|silent
* Should this search head run update triggers for apps modified by other
search heads in the pool?
* For more information about update triggers specifically, see the
[triggers] stanza in the $SPLUNK_HOME/etc/system/README/app.conf.spec
file.
* If set to true, this search head attempts to reload inputs, indexes,
custom REST endpoints, etc. stored within apps that are installed,
updated, enabled, or disabled by other search heads.
* If set to false, this search head does not run any update triggers.
Note that this search head still detects configuration changes and app
state changes made by other search heads. It simply does not reload
any components within Splunk that might care about those changes, like
input processors or the HTTP server.
* If set to silent, it is like setting a value of 'true', with one
difference: update triggers never result in restart banner messages
or restart warnings in the UI. Any need to restart is instead
signaled only by messages in splunkd.log.
* Default: true
lock.logging = <boolean>
* When acquiring a file-based lock, log information into the locked
file.
* This information typically includes:
* Which host is acquiring the lock
* What that host intends to do while holding the lock
* There is no maximum filesize or rolling policy for this logging. If
you enable this setting, you must periodically truncate the locked
file yourself to prevent unbounded growth.
* The information logged to the locked file is intended for debugging
purposes only. Splunk makes no guarantees regarding the contents of
the file. It may, for example, write padding NULs to the file or
truncate the file at any time.
* Default: false
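For example, a hypothetical [pooling] fragment in
$SPLUNK_HOME/etc/system/local/server.conf (a complete configuration also
requires settings not shown in this excerpt, such as the shared storage
location) might look like:

[pooling]
state = enabled
app_update_triggers = silent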
############################################################################
# The following two intervals interrelate; the longest possible time for
# a state change to travel from one search pool member to the rest
# should be approximately the sum of these two timers.
############################################################################
poll.blacklist.<name> = <regex>
* Do not check configuration files for changes if they match this
regular expression.
* Example: Do not check vim swap files for changes -- .swp$
[clustering]
mode = [master|slave|searchhead|disabled]
* Sets operational mode for this cluster node.
* Only one master may exist per cluster.
* Default: disabled
master_uri = [<uri>|clustermaster:<stanza1>,clustermaster:<stanza2>,...]
* Only valid for 'mode=slave' or 'mode=searchhead'.
* The URI of the cluster master that this slave or search head
should connect to.
* An example of <uri>: <scheme>://<hostname>:<port>
* Only for 'mode=searchhead' - If the search head is a part of multiple
clusters, the master URIs can be specified by a comma separated list.
advertised_disk_capacity = <integer>
* Percentage to use when advertising disk capacity to the cluster
master. This is useful for modifying weighted load balancing in
indexer discovery.
* For example, if you set this attribute to 50 for an indexer with a
500GB disk, the indexer advertises its disk size as 250GB, not 500GB.
* Acceptable value range is 10 to 100.
* Default: 100
pass4SymmKey = <password>
* Secret shared among the nodes in the cluster to prevent any
arbitrary node from connecting to the cluster. If a slave or
search head is not configured with the same secret as the master,
it is not able to communicate with the master.
* If it is not set in the [clustering] stanza, the key is looked for in
the [general] stanza.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* No default.
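For example, a hypothetical peer node (slave) configuration (the host
name and secret are illustrative) might look like:

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = mySharedSecret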
max_fixup_time_ms = <integer>
* Only valid for 'mode=master'.
* Specifies how long, in milliseconds, each fixup level runs before
short-circuiting, to avoid the master spending an excessive amount
of time blocking other operations.
* 0 denotes that there is no max fixup timer.
* Default: 0
cxn_timeout = <integer>
* Lowlevel timeout, in seconds, for establishing connection between
cluster nodes.
* Default: 60
send_timeout = <integer>
* Lowlevel timeout, in seconds, for sending data between cluster nodes.
* Default: 60
rcv_timeout = <integer>
* Lowlevel timeout, in seconds, for receiving data between cluster
nodes.
* Default: 60
rep_cxn_timeout = <integer>
* Lowlevel timeout, in seconds, for establishing connection for
replicating data.
* Default: 5
rep_send_timeout = <integer>
* Lowlevel timeout, in seconds, for sending replication slice data
between cluster nodes.
* This is a soft timeout. When this timeout is triggered on the source
peer, it tries to determine if the target is still alive. If it is
still alive, it resets the timeout for another 'rep_send_timeout'
interval and continues. If the target has failed or the cumulative
timeout has exceeded 'rep_max_send_timeout', replication fails.
* Default: 5
rep_rcv_timeout = <integer>
* Lowlevel timeout, in seconds, for receiving acknowledgment data from
peers.
* This is a soft timeout. When this timeout is triggered on the source
peer, it tries to determine if the target is still alive. If it is
still alive, it resets the timeout for another 'rep_send_timeout'
interval and continues.
* If the target has failed or the cumulative timeout has exceeded
'rep_max_rcv_timeout', replication fails.
* Default: 10
search_files_retry_timeout = <integer>
* Timeout, in seconds, after which request for search files from a
peer is aborted.
* To make a bucket searchable, search-specific files are copied from
another source peer with search files. If search files on the source
peers are undergoing changes, the source peer asks the requesting peer
to retry after some time. If the cumulative retry period exceeds the
specified timeout, the requesting peer aborts the request and requests
search files from another peer in the cluster that may have search
files.
* Default: 600 (10 minutes)
re_add_on_bucket_request_error = <boolean>
* Valid only for 'mode=slave'.
* If set to true, slave re-adds itself to the cluster master if
cluster master returns an error on any bucket request. On re-add,
slave updates the master with the latest state of all its buckets.
* If set to false, the slave doesn't re-add itself to the cluster
master. Instead, it updates the master with those buckets for which
the master returned an error.
* Default: false
decommission_search_jobs_wait_secs = <integer>
* Valid only for mode=slave
* Determines the maximum time, in seconds, that a peer node waits for
search jobs to finish before it transitions to the down (or)
GracefulShutdown state, in response to the 'splunk offline' (or)
'splunk offline --enforce-counts' command.
* Default: 180 (3 minutes)
decommission_node_force_timeout = <seconds>
* Valid only for mode=slave and during node offline operation.
* The maximum time, in seconds, that a peer node waits for searchable
copy reallocation jobs to finish before it transitions to the down
(or) GracefulShutdown state.
* This period begins after the peer node receives a 'splunk offline'
command or its '/cluster/slave/control/control/decommission' REST
endpoint is accessed.
* This attribute is not applicable to the "--enforce-counts" version of
the 'splunk offline' command.
* Default: 300 (5 minutes)
rolling_restart = restart|shutdown|searchable|searchable_force
* Only valid for 'mode=master'.
* Determines whether indexer peers restart or shutdown during a rolling
restart.
* If set to restart, each peer automatically restarts during a rolling
restart.
* If set to shutdown, each peer is stopped during a rolling restart,
and the customer must manually restart each peer.
* If set to searchable, the cluster attempts a best-effort to maintain
a searchable state during the rolling restart by reassigning primaries
from peers that are about to restart to other searchable peers, and
performing a health check to ensure that a searchable rolling restart
is possible.
* If set to searchable_force, the cluster performs a searchable
rolling restart, but overrides the health check and enforces
'decommission_force_timeout' and 'restart_inactivity_timeout'.
* If set to searchable or searchable_force, scheduled searches
are deferred or run during the rolling restart based on the
'defer_scheduled_searchable_idx' setting in savedsearches.conf.
* Default: restart.
site_by_site = <boolean>
* Only valid for mode=master and multisite=true.
* If set to true, the master restarts peers from one site at a time,
waiting for all peers from a site to restart before moving on to
another site, during a rolling restart.
* If set to false, the master randomly selects peers to restart, from
across all sites, during a rolling restart.
* Default: true.
rep_max_send_timeout = <integer>
* Maximum send timeout, in seconds, for sending replication slice
data between cluster nodes.
* On each 'rep_send_timeout', the source peer determines if the total
send timeout has exceeded 'rep_max_send_timeout'. If so, replication
fails.
* Default: 180 (3 minutes)
rep_max_rcv_timeout = <integer>
* Maximum cumulative receive timeout, in seconds, for receiving
acknowledgment data from peers.
* On each 'rep_rcv_timeout', the source peer determines if the total
receive timeout has exceeded 'rep_max_rcv_timeout'. If so, replication
fails.
* Default: 180 (3 minutes)
multisite = <boolean>
* Turns on the multisite feature for this master.
* Make sure you set site parameters on the peers when you turn this to
true.
* Default: false
site_replication_factor = <comma-separated string>
* Only valid for 'mode=master' and only used if multisite is true.
* Specifies the per-site replication policy for any given bucket.
* The total must be greater than or equal to the sum of all the other
counts (including origin).
* The difference between total and the sum of all the other counts
is distributed across the remaining sites.
* Example 1: site_replication_factor = origin:2, total:3
Given a cluster of 3 sites, all indexing data, every site has 2
copies of every bucket ingested in that site and one rawdata
copy is put in one of the other 2 sites.
* Example 2: site_replication_factor = origin:2, site3:1, total:3
Given a cluster of 3 sites, 2 of them indexing data, every
bucket has 2 copies in the origin site and one copy in site3. So
site3 has one rawdata copy of buckets ingested in both site1 and
site2 and those two sites have 2 copies of their own buckets.
* Default: origin:2, total:3
site_mappings = <comma-separated string>
* Only valid for 'mode=master'.
* When you decommission a site, you must update this setting so that the
cluster can map the bucket copies of the decommissioned site to an
active site. The bucket copies for which a decommissioned site is the
origin site are then replicated to the active site specified by the
mapping.
* Used only if multisite is true and sites have been decommissioned.
* Each comma-separated entry is of the form
<decommissioned_site_id>:<active_site_id>
or default_mapping:<default_site_id>.
<decommissioned_site_id> is a decommissioned site and <active_site_id>
is an existing site, specified in the 'available_sites' setting.
For example, if available_sites=site1,site2,site3,site4 and you
decommission site2, you can map site2 to a remaining site such as
site4, like this: site2:site4 .
* If a site used in a mapping is later decommissioned, its previous
mappings must be remapped to an available site. For instance, if you
have the mapping site1:site2 but site2 is later decommissioned, you
can remap both site1 and site2 to an active site3 using the following
replacement mappings - site1:site3,site2:site3.
* The optional entry with syntax default_mapping:<default_site_id>
represents the default mapping, for cases where an explicit mapping
site is not specified. For example: default_mapping:site3 maps any
decommissioned site to site3, if it is not otherwise explicitly mapped
to a site. There can only be one such entry.
* Example 1: site_mappings = site1:site3,default_mapping:site4.
The cluster must include site3 and site4 in available_sites, and site1
must be decommissioned. The origin bucket copies for decommissioned
site1 are mapped to site3. Bucket copies for any other decommissioned
sites are mapped to site4.
* Example 2: site_mappings = site2:site3
The cluster must include site3 in available_sites, and site2 must be
decommissioned. The origin bucket copies for decommissioned site2 are
mapped to site3. This cluster has no default.
* Example 3: site_mappings = default_mapping:site5
The above cluster must include site5 in available_sites. The origin
bucket copies for any decommissioned sites are mapped onto site5.
* Default: an empty string
constrain_singlesite_buckets = <boolean>
* Only valid for mode=master, and only used if multisite is true.
* Specifies whether the cluster keeps single-site buckets within one
site in multisite clustering.
* When this setting is "true", buckets in a single-site cluster do not
replicate outside of their site. The buckets follow the
'replication_factor' and 'search_factor' policies rather than the
'site_replication_factor' and 'site_search_factor' policies. This is
to mimic the behavior of single-site clustering.
* When this setting is "false", buckets in non-multisite clusters can
replicate across sites, and must meet the specified
'site_replication_factor' and 'site_search_factor' policies.
* Default: true
access_logging_for_heartbeats = <boolean>
* Only valid for 'mode=master'.
* Enables/disables logging to the splunkd_access.log file for peer
heartbeats.
* NOTE: You do not have to restart the master to set this configuration
parameter. Simply run this CLI command on the master:
% splunk edit cluster-config -access_logging_for_heartbeats <boolean>
* Default: false (logging disabled)
generation_poll_interval = <positive integer>
* How often, in seconds, the search head polls the master for
generation information.
* This setting is valid only if 'mode=master' or 'mode=searchhead'.
* Default: 5
max_peer_build_load = <integer>
* This is the maximum number of concurrent tasks to make buckets
searchable that can be assigned to a peer.
* Default: 2
max_peer_rep_load = <integer>
* This is the maximum number of concurrent non-streaming
replications that a peer can take part in as a target.
* Default: 5
max_peer_sum_rep_load = <integer>
* This is the maximum number of concurrent summary replications
that a peer can take part in as either a target or source.
* Default: 5
max_nonhot_rep_kBps = <integer>
* The maximum throughput, in kilobytes per second, for warm/cold/summary
replications on a specific source peer. Similar to the forwarder's
'maxKBps' setting in the limits.conf file.
* This setting throttles total bandwidth consumption for all
outgoing non-hot replication connections from a given source peer.
It does not throttle at the 'per-replication-connection', per-target
level.
* This setting is reloadable without restart if manually updated on the
source peers by using the command "splunk edit cluster-config"
or by making the corresponding REST call. We don't recommend updating
this setting across all the peers using bundle push because:
1) The push requires a rolling restart, as do all bundle pushes
with the server.conf file change.
2) You might want to set different values on different peers.
* If set to 0, signifies unlimited throughput.
* Default: 0
max_replication_errors = <integer>
* Only valid for 'mode=slave'.
* This is the maximum number of consecutive replication errors
(currently only for hot bucket replication) from a source peer
to a specific target peer. Until this limit is reached, the
source continues to roll hot buckets on streaming failures to
this target. After the limit is reached, the source no
longer rolls hot buckets if streaming to this specific target
fails. This is reset if at least one successful (hot bucket)
replication occurs to this target from this source.
* The special value of 0 turns off this safeguard; so the source
always rolls hot buckets on streaming error to any target.
* Default: 3
searchable_targets = <boolean>
* Only valid for 'mode=master'.
* Tells the master to make some replication targets searchable
even while the replication is going on. This only affects
hot bucket replication for now.
* Default: true
searchable_target_sync_timeout = <integer>
* Only valid for 'mode=slave'.
* If a hot bucket replication connection is inactive for this time,
in seconds, a searchable target flushes out any pending search
related in-memory files.
* Regular syncing - when the data is flowing through
regularly and the connection is not inactive - happens at a
faster rate (default of 5 secs controlled by
streamingTargetTsidxSyncPeriodMsec in indexes.conf).
* The special value of 0 turns off this timeout behavior.
* Default: 60
* Only valid for mode=master.
* Maximum number of peers that can simultaneously download the
configuration bundle from the master, in response to the
'splunk apply cluster-bundle' command.
* When a peer finishes the download, the next waiting peer, if any,
begins its download.
* If set to 0, all peers try to download at once.
* Default: 0
auto_rebalance_primaries = <boolean>
* Only valid for 'mode=master'.
* Specifies if the master should automatically rebalance bucket
primaries on certain triggers. Currently the only defined
trigger is when a peer registers with the master. When a peer
registers, the master redistributes the bucket primaries so the
cluster can make use of any copies in the incoming peer.
* Default: true
idle_connections_pool_size = <integer>
* Only valid for 'mode=master'.
* Specifies how many idle http(s) connections we should keep alive to
reuse. Reusing connections improves the time it takes to send messages
to peers in the cluster.
* -1 corresponds to "auto", letting the master determine the
number of connections to keep around based on the number of peers in
the cluster.
* Default: -1
use_batch_mask_changes = <boolean>
* Only valid for mode=master
* Specifies if the master should process bucket mask changes in
batch or individually one by one.
* Set to false when there are version 6.1 peers in the cluster, for
backwards compatibility.
* Default: true
summary_replication = true|false|disabled
* Valid for both 'mode=master' and 'mode=slave'.
* Cluster Master:
If set to true, summary replication is enabled.
If set to false, summary replication is disabled, but can be enabled
at runtime.
If set to disabled, summary replication is disabled. Summary
replication cannot be enabled at runtime.
* Peers:
If set to true or false, there is no effect. The indexer follows
whatever setting is on the Cluster Master.
If set to disabled, summary replication is disabled. The indexer does
no scanning of summaries (this improves performance when peers join
the cluster, for large clusters).
* Default: false (for both Cluster Master and Peers)
buckets_to_summarize = <primaries|primaries_and_hot|all>
* Only valid for 'mode=master'.
* Determines the buckets to which we send '| summarize' searches
(searches that build report acceleration and data models). 'primaries'
applies it to only primary buckets, while 'primaries_and_hot' also
applies it to all hot searchable buckets. 'all' applies the search to
all buckets.
* If 'summary_replication' is enabled, then 'buckets_to_summarize'
defaults to 'primaries_and_hot'.
* Do not change this setting without first consulting with Splunk
Support.
* Default: primaries
maintenance_mode = <boolean>
* Only valid for 'mode=master'.
* To preserve the maintenance mode setting in case of master restart,
the master automatically updates this setting in the
etc/system/local/server.conf file whenever the user enables or
disables maintenance mode using CLI or REST.
* NOTE: Do not manually update this setting. Instead use CLI or REST
to enable or disable maintenance mode.
backup_and_restore_primaries_in_maintenance = <boolean>
* Only valid for 'mode=master'.
* Determines whether the master performs a backup/restore of bucket
primary masks during maintenance mode or rolling-restart of cluster
peers.
* If set to true, restoration of primaries occurs automatically when
the peers rejoin the cluster after a scheduled restart or upgrade.
* Default: false
allow_default_empty_p4symmkey = <boolean>
* Only valid for 'mode=master'.
* Affects behavior of the master during start-up, if 'pass4SymmKey'
resolves to the null string or the default password ("changeme").
* If set to true, the master posts a warning but still launches.
* If set to false, the master posts a warning and stops.
* Default: true
register_forwarder_address = <IP address or fully qualified
machine/domain name>
* Only valid for 'mode=slave'.
* This is the address on which a slave is available for accepting
data from a forwarder. This is useful in cases where a splunk host
machine has multiple interfaces and only one of them can be reached by
another splunkd instance.
manual_detention = on|on_ports_enabled|off
* Only valid for 'mode=slave'.
* Puts this peer node in manual detention.
* Default: off
buckets_per_addpeer = <non-negative integer>
* Only valid for 'mode=slave'.
* Controls the number of buckets per add peer request. When a peer is
added or re-added to the cluster, it sends the master information
about its buckets. You can use this setting to split large numbers of
buckets into several "batch-add-peer" requests.
* If it is invalid or non-existent, the peer uses the default setting
instead.
* If it is set to 0, the peer sends only one request with all buckets
instead of batches.
* Default: 1000
recreate_bucket_fetch_manifest_batch_size = <positive integer>
* Only valid for 'mode=master'.
* Controls the maximum number of bucket IDs for which a slave
attempts to initiate a parallel fetch of manifests at a time
in the process of recreating buckets that have been
requested by the master.
* The master sends this setting to all the slaves that are
involved in the process of recreating the buckets.
* Default: 50
notify_scan_period = <non-negative integer>
* Only valid for 'mode=slave'.
* Controls the frequency, in seconds, that the indexer handles
the following options:
1. buckets_status_notification_batch_size
2. summary_update_batch_size
3. summary_registration_batch_size
* CAUTION: Do not modify this setting without guidance from
Splunk personnel.
* Default: 10
enableS2SHeartbeat = true|false
* Only valid for 'mode=slave'.
* Splunk software monitors each replication connection for
presence of a heartbeat, and if the heartbeat is not seen for
's2sHeartbeatTimeout' seconds, it closes the connection.
* Default: true
s2sHeartbeatTimeout = <seconds>
* This specifies the global timeout value, in seconds, for monitoring
heartbeats on replication connections.
* Splunk software closes a replication connection if a heartbeat is not
seen for 's2sHeartbeatTimeout' seconds.
* Replication source sends heartbeats every 30 seconds.
* Default: 600 (10 minutes)
throwOnBucketBuildReadError = true|false
* Valid only for 'mode=slave'.
* If set to true, the index clustering slave throws an exception if it
encounters a journal read error while building the bucket for a new
searchable copy. It also throws away all the search and other files
generated so far in this particular bucket build.
* If set to false, the index clustering slave just logs the error,
preserves all the search and other files generated so far, and finalizes
them, as it cannot proceed further with this bucket.
* Default: false
cluster_label = <string>
* This specifies the label of the indexer cluster.
[clustermaster:<stanza>]
* Only valid for 'mode=searchhead' when the search head is a part of
multiple clusters.
master_uri = <uri>
* Only valid for 'mode=searchhead' when present in this stanza.
* URI of the cluster master that this search head should connect to.
pass4SymmKey = <password>
* Secret shared among the nodes in the cluster to prevent any arbitrary
node from connecting to the cluster. If a search head is not configured
with the same secret as the master, it will not be able to communicate
with the master.
* If it is not present here, the key in the clustering stanza is used.
If it is not present in the clustering stanza, the value in the general
stanza is used.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* No default.
site = <site-id>
* Specifies the site this search head belongs to for this particular
master when multisite is enabled (see below).
* Valid values for site-id include site0 to site63.
* The special value "site0" disables site affinity for a search head in
a multisite cluster. It is only valid for a search head.
multisite = <boolean>
* Turns on the multisite feature for this master_uri for the search head.
* Make sure the master has the multisite feature turned on.
* Make sure you specify the site in case this is set to true. If no
configuration is found in the [clustermaster] stanza, we default to any
value for site that might be defined in the [general] stanza.
* Default: false
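For illustration, a sketch of a search head joining a multisite indexer cluster with site affinity (the hostname, secret, and site are hypothetical values):
[clustering]
mode = searchhead
master_uri = clustermaster:east
[clustermaster:east]
master_uri = https://SplunkMaster01.example.com:8089
pass4SymmKey = someSecret
multisite = true
site = site1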
[replication_port://<port>]
# Configure Splunk to listen on a given TCP port for replicated data
# from another cluster member.
# If 'mode=slave' is set in the [clustering] stanza, at least one
# 'replication_port' must be configured and not disabled.
disabled = true|false
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
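For illustration, a minimal peer-side replication port stanza (the port number is a hypothetical choice):
[replication_port://9887]
disabled = false
listenOnIPv6 = no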
[replication_port-ssl://<port>]
* This configuration is same as the [replication_port] stanza above,
but uses SSL.
disabled = <boolean>
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
serverCert = <path>
* Full path to file containing private key and server certificate.
* The <path> must refer to a PEM format file.
* No default.
sslPassword = <password>
* Server certificate password, if any.
* No default.
password = <password>
* DEPRECATED; use 'sslPassword' instead.
rootCA = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more
root CA certificates concatenated together.
* No default.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version
"tls" selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the sslVersions setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the current
default.
ecdhCurves = <comma-separated list of EC curves>
* The curves should be specified in the order of preference.
* The client sends these curves as a part of Client Hello.
* The server supports only the curves specified in the list.
* We only support named curves specified by their SHORT names.
(see struct ASN1_OBJECT in asn1.h)
* The list of valid named curves by their short/long names can be
obtained by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* e.g. ecdhCurves = prime256v1,secp384r1,secp521r1
* Default: The default can vary. See the ecdhCurves setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the current
default.
dhFile = <path>
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Not set by default.
dhfile = <path>
* DEPRECATED; use 'dhFile' instead.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is now always disabled. The exact set of SSL versions
allowed is now configurable by using the 'sslVersions' setting above.
useSSLCompression = <boolean>
* If true, enables SSL compression.
* Default: true
compressed = <boolean>
* DEPRECATED. Use 'useSSLCompression' instead.
* Used only if 'useSSLCompression' is not set.
requireClientCert = <boolean>
* Requires that any peer that connects to the replication port has a
certificate that can be validated by the certificate authority specified
in 'rootCA'.
* Default: false
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the
connection settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* Optional.
* Check the common name of the client's certificate against this list of
names.
* requireClientCert must be set to "true" for this setting to work.
* No default.
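For illustration, a sketch of an SSL-enabled replication port combining the settings above (the port, path, and password are hypothetical):
[replication_port-ssl://9887]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslVersions = tls1.2
requireClientCert = true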
Introspection settings
[introspection:generator:disk_objects]
* For 'introspection_generator_addon', packaged with Splunk; provides the
data ("i-data") consumed, and reported on, by 'introspection_viewer_app'
(due to ship with a future release).
* This stanza controls the collection of i-data about: indexes; bucket
superdirectories (homePath, coldPath, ...); volumes; search dispatch
artifacts.
* On forwarders the collection of index, volume, and dispatch disk
objects is disabled.
collectionPeriodInSecs = <positive integer>
* Controls the frequency of disk objects i-data collection; a higher
frequency (hence, a smaller period) gives a more accurate picture, but
at the cost of greater resource consumption, both directly (the
collection itself) and indirectly (increased disk and bandwidth
utilization, to store the produced i-data).
* Default: 600 (10 minutes)
[introspection:generator:disk_objects__indexes]
* This stanza controls the collection of i-data about indexes.
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
indexes on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
[introspection:generator:disk_objects__volumes]
* This stanza controls the collection of i-data about volumes.
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
volumes on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
[introspection:generator:disk_objects__dispatch]
* This stanza controls the collection of i-data about search dispatch
artifacts.
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
search dispatch artifacts on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
[introspection:generator:disk_objects__fishbucket]
* This stanza controls the collection of i-data about:
$SPLUNK_DB/fishbucket, where we persist per-input status of
file-based
inputs.
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
[introspection:generator:disk_objects__bundle_replication]
* This stanza controls the collection of i-data about:
bundle replication metrics of distributed search
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
[introspection:generator:disk_objects__partitions]
* This stanza controls the collection of i-data about: disk partition
space
utilization.
* Inherits the values of the 'acquireExtra_i_data' and
'collectionPeriodInSecs' attributes from the
'introspection:generator:disk_objects' stanza, but may be
enabled/disabled independently of it.
[introspection:generator:disk_objects__summaries]
* Introspection data about summary disk space usage. Summary disk usage
includes both data model and report summaries. The usage is collected
for each summaryId, locally at each indexer.
[introspection:generator:resource_usage]
* For 'introspection_generator_addon', packaged with Splunk; provides the
data ("i-data") consumed, and reported on, by 'introspection_viewer_app'
(due to ship with a future release).
* "Resource Usage" here refers to: CPU usage; scheduler overhead; main
(physical) memory; virtual memory; pager overhead; swap; I/O; process
creation (a.k.a. forking); file descriptors; TCP sockets;
receive/transmit networking bandwidth.
* Resource Usage i-data is collected at both hostwide and per-process
levels; the latter, only for processes associated with this
SPLUNK_HOME.
* Per-process i-data for Splunk search processes include additional,
search-specific, information.
[introspection:generator:resource_usage__iostats]
* This stanza controls the collection of i-data about: IO Statistics
data
* "IO Statistics" here refers to: read/write requests; read/write sizes;
io service time; cpu usage during service
* IO Statistics i-data is sampled over the 'collectionPeriodInSecs'.
* Does not inherit the value of the 'collectionPeriodInSecs' attribute
from the 'introspection:generator:resource_usage' stanza, and may be
enabled/disabled independently of it.
* Default: 60 (1 minute)
[introspection:generator:kvstore]
* For 'introspection_generator_addon', packaged with Splunk.
* "KV Store" here refers to: statistics information about KV Store
process.
[commands:user_configurable]
prefix = <path>
* All non-internal commands started by splunkd are prefixed with this
string, allowing for "jailed" command execution.
* Should be only one word. In other words, a command is supported, but a
command with arguments is not.
* Applies to commands such as: search scripts, scripted inputs, SSL
certificate generation scripts. (Any commands that are
user-configurable.)
* Does not apply to trusted/non-configurable command executions, such as:
splunk search, splunk-optimize, gunzip.
* No default.
[shclustering]
disabled = <boolean>
* Disables or enables search head clustering on this instance.
* When enabled, the captain needs to be selected via a
bootstrap mechanism. Once bootstrapped, further captain
selections are made via a dynamic election mechanism.
* When enabled, you must also specify the cluster member's own server
address / management URI, for identification purposes. This can be done
in two ways: by specifying the 'mgmt_uri' setting individually on each
member, or by specifying pairs of 'GUID, mgmt-uri' strings in the
'servers_list' attribute.
* Default: true
mgmt_uri = [ mgmt-URI ]
* The management URI is used to identify the cluster member's own address
to itself.
* Either 'mgmt_uri' or 'servers_list' is necessary.
* The 'mgmt_uri' setting is simpler to author, but is unique for each
member.
* The 'servers_list' setting is more involved, but can be copied as a
config string to all members in the cluster.
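For illustration, a minimal sketch of a member-level [shclustering] stanza using the 'mgmt_uri' style of self-identification (the hostname and secret are hypothetical):
[shclustering]
disabled = false
mgmt_uri = https://sh1.example.com:8089
pass4SymmKey = shcSecret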
adhoc_searchhead = <boolean>
* This setting configures a member as an adhoc search head; i.e., the
member does not run any scheduled jobs.
* Use the setting 'captain_is_adhoc_searchhead' to reduce compute load on
the captain.
* Default: false
no_artifact_replications = <boolean>
* Prevents this search head cluster member from being selected as a
target for replications.
* This is an advanced setting, and not to be changed without proper
understanding of the implications.
* Default: false
captain_is_adhoc_searchhead = <boolean>
* This setting prohibits the captain from running scheduled jobs.
* The captain is dedicated to controlling the activities of the cluster,
but can also run adhoc search jobs from clients.
* Default: false
preferred_captain = <boolean>
* The cluster tries to assign captaincy to a member with
'preferred_captain=true'.
* Note that it is not always possible to assign captaincy to a member
with preferred_captain=true - for example, if none of the preferred
members is reachable over the network. In that case, captaincy might
remain on a member with preferred_captain=false.
* Default: true
prevent_out_of_sync_captain = <boolean>
* This setting prevents a node that could not sync config changes to the
current captain from becoming the cluster captain.
* This setting takes precedence over the preferred_captain setting. For
example, if there are one or more preferred captain nodes but the nodes
cannot sync config changes with the current captain, then the current
captain retains captaincy even if it is not a preferred captain.
* This must be set to the same value on all members.
* Default: true
pass4SymmKey = <password>
* Secret shared among the members in the search head cluster to prevent
any arbitrary instance from connecting to the cluster.
* All members must use the same value.
* If set in the [shclustering] stanza, it takes precedence over any
setting in the [general] stanza.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* Default: 'changeme' from the [general] stanza in the default
server.conf file.
async_replicate_on_proxy = <boolean>
* If the jobs/${sid}/results REST endpoint had to be proxied to a
different member due to a missing local replica, this attribute, when
set to true, automatically schedules an async replication to that member.
* Default: true
master_dump_service_periods = <integer>
* If SHPMaster info is switched on in log.cfg, then captain statistics
are dumped in splunkd.log after the specified number of service periods.
* Purely a debugging aid.
* Default: 500
long_running_jobs_poll_period = <integer>
* Long running delegated jobs are polled by the captain every
"long_running_jobs_poll_period" seconds to ascertain whether they are
still running, in order to account for potential node/member failure.
* Default: 600 (10 minutes)
scheduling_heuristic = <string>
* This setting configures the job distribution heuristic on the captain.
* There are currently two supported strategies: 'round_robin' or
'scheduler_load_based'.
* Default: 'scheduler_load_based'
id = <GUID>
* Unique identifier for this cluster as a whole, shared across all
cluster members.
* By default, Splunk software arranges for a unique value to be generated
and shared across all members.
cxn_timeout = <integer>
* Low-level timeout, in seconds, for establishing connection between
cluster members.
* Default: 60
send_timeout = <integer>
* Low-level timeout, in seconds, for sending data between search head
cluster members.
* Default: 60
rcv_timeout = <integer>
* Low-level timeout, in seconds, for receiving data between search head
cluster members.
* Default: 60
cxn_timeout_raft = <integer>
* Low-level timeout, in seconds, for establishing connection between
search head cluster members for the raft protocol.
* Default: 2
send_timeout_raft = <integer>
* Low-level timeout, in seconds, for sending data between search head
cluster members for the raft protocol.
* Default: 5
rcv_timeout_raft = <integer>
* Low-level timeout, in seconds, for receiving data between search head
cluster members for the raft protocol.
* Default: 5
rep_cxn_timeout = <integer>
* Low-level timeout, in seconds, for establishing connection for
replicating data.
* Default: 5
rep_send_timeout = <integer>
* Low-level timeout, in seconds, for sending replication slice data
between cluster members.
* This is a soft timeout. When this timeout is triggered on the source
peer, it tries to determine if the target is still alive. If it is still
alive, it resets the timeout for another 'rep_send_timeout' interval and
continues. If the target has failed or the cumulative timeout has
exceeded 'rep_max_send_timeout', replication fails.
* Default: 5
rep_rcv_timeout = <integer>
* Low-level timeout, in seconds, for receiving acknowledgement data from
members.
* This is a soft timeout. When this timeout is triggered on the source
member, it tries to determine if the target is still alive. If it is
still alive, it resets the timeout for another 'rep_send_timeout'
interval and continues. If the target has failed or the cumulative
timeout has exceeded the 'rep_max_rcv_timeout' setting, replication
fails.
* Default: 10
rep_max_send_timeout = <integer>
* Maximum send timeout, in seconds, for sending replication slice data
between cluster members.
* On 'rep_send_timeout', the source peer determines if the total send
timeout has exceeded 'rep_max_send_timeout'. If so, replication fails.
* If the cumulative 'rep_send_timeout' exceeds 'rep_max_send_timeout',
replication fails.
* Default: 600 (10 minutes)
rep_max_rcv_timeout = <integer>
* Maximum cumulative receive timeout, in seconds, for receiving
acknowledgement data from members.
* On 'rep_rcv_timeout', the source member determines if the total receive
timeout has exceeded 'rep_max_rcv_timeout'. If so, replication fails.
* Default: 600 (10 minutes)
log_heartbeat_append_entries = <boolean>
* If true, Splunk software logs the low-level heartbeats between members
in the splunkd_access.log file. These heartbeats are used to maintain
the authority of the captain over other members.
* Default: false
election_timeout_ms = <positive_integer>
* The amount of time, in milliseconds, that a member waits before
trying to become the captain.
* Note that modifying this value can alter the heartbeat period (see
'election_timeout_2_hb_ratio' for further details).
* A very low value of election_timeout_ms can lead to unnecessary captain
elections.
* Default: 60000 (1 minute)
election_timeout_2_hb_ratio = <positive_integer>
* The ratio between the election timeout, set in election_timeout_ms, and
the raft heartbeat period.
* Raft heartbeat period = election_timeout_ms / election_timeout_2_hb_ratio
* A typical ratio between 5 and 20 is desirable. The default is 12, to
keep the raft heartbeat period at 5s, i.e. election_timeout_ms(60000ms) / 12.
* This ratio determines the number of heartbeat attempts that would fail
before a member starts to time out and tries to become the captain.
access_logging_for_heartbeats = <boolean>
* Only valid on the captain.
* Enables/disables logging to the splunkd_access.log file for member
heartbeats.
* NOTE: you do not have to restart the captain to set this config
parameter. Simply run this CLI command on the master:
% splunk edit shcluster-config -access_logging_for_heartbeats <boolean>
* Default: false (logging disabled)
max_peer_rep_load = <integer>
* This is the maximum number of concurrent replications that a
member can take part in as a target.
* Default: 5
manual_detention = on|off
* This property toggles manual detention on a member.
* When a node is in manual detention, it does not accept new search jobs,
including both scheduled and ad-hoc searches. It also does not receive
replicated search artifacts from other nodes.
* Default: off
percent_peers_to_restart = <integer>
* The percentage of members to restart at one time during rolling
restarts.
* Actual percentage may vary due to lack of granularity for smaller peer
sets. Regardless of the setting, a minimum of 1 peer is restarted per
round.
* Valid values are between 0 and 100.
* CAUTION: Do not set this attribute to a value greater than 20%.
Otherwise, issues can arise during the captain election process.
rolling_restart_with_captaincy_exchange = <boolean>
* If this boolean is turned on, captain tries to exchange captaincy
with another node during rolling restart.
* If set to false, captain restarts and captaincy transfers to some
other node.
* Default: true
rolling_restart = restart|searchable|searchable_force
* Determines the rolling restart mode for a search head cluster.
* If set to restart, a rolling restart runs in classic mode.
* If set to searchable, a rolling restart runs in searchable (minimal
search disruption) mode.
* If set to searchable_force, the search head cluster performs a
searchable rolling restart, but overrides the health check.
* Note: You do not have to restart any search head members to set this
parameter. Run this CLI command from any member:
% splunk edit shcluster-config -rolling_restart
restart|searchable|searchable_force
* Default: restart (runs in classic rolling-restart mode)
heartbeat_period = <non-negative integer>
* Controls the frequency, in seconds, with which the member attempts
to send heartbeats to the captain.
* This heartbeat exchanges data between the captain and members, which
helps in maintaining the in-memory centralized state for all the
cluster members.
* Note that this heartbeat period is different from the Raft heartbeat
period in the election_timeout_2_hb_ratio setting.
* Default: 5
enableS2SHeartbeat = <boolean>
* Splunk software monitors each replication connection for the presence
of a heartbeat.
* If the heartbeat is not seen for 's2sHeartbeatTimeout' seconds, it
closes the connection.
* Default: true
s2sHeartbeatTimeout = <integer>
* This specifies the global timeout value, in seconds, for monitoring
heartbeats on replication connections.
* Splunk software closes a replication connection if a heartbeat is not
seen for 's2sHeartbeatTimeout' seconds.
* Replication source sends a heartbeat every 30 seconds.
* Default: 600 (10 minutes)
captain_uri = [ static-captain-URI ]
* The management URI of the static captain; used to identify the cluster
captain for a static captain deployment.
election = <boolean>
* This is used to classify a cluster as static or dynamic (RAFT based).
* If set to false, the cluster uses a static captain, which is intended
for disaster recovery (DR) situations.
* If set to true, dynamic captain election is enabled through the RAFT
protocol.
mode = <member>
* Accepted values are captain and member. 'mode' is used to identify the
function of a node in a static search head cluster. Setting mode as
captain assumes it functions as both captain and a member.
#proxying related
sid_proxying = <boolean>
* Enable or disable search artifact proxying.
* Changing this affects the proxying of search results, and makes the
jobs feed not cluster-aware.
* Only for internal/expert use.
* Default: true
ss_proxying = <boolean>
* Enable or disable saved search proxying to captain.
* Changing this affects the behavior of the Searches and Reports page
in Splunk Web.
* Only for internal/expert use.
* Default: true
ra_proxying = <boolean>
* Enable or disable proxying of saved report acceleration summaries to
the captain.
* Changing this affects the behavior of the report acceleration
summaries page.
* Only for internal/expert use.
* Default: true
alert_proxying = <boolean>
* Enable or disable alerts proxying to the captain.
* Changing this impacts the behavior of alerts, and essentially makes
them not cluster-aware.
* Only for internal/expert use.
* Default: true
csv_journal_rows_per_hb = <integer>
* Controls how many rows of CSV from the delta-journal are sent per
heartbeat.
* Used for both alerts and suppressions.
* Do not alter this value without contacting Splunk Support.
* Default: 10000
conf_replication_period = <integer>
* Controls how often, in seconds, a cluster member replicates
configuration changes.
* A value of 0 disables automatic replication of configuration changes.
* Default: 5
conf_replication_max_pull_count = <integer>
* Controls the maximum number of configuration changes a member
replicates from the captain at one time.
* A value of 0 disables any size limits.
* Default: 1000
conf_replication_max_push_count = <integer>
* Controls the maximum number of configuration changes a member
replicates to the captain at one time.
* A value of 0 disables any size limits.
* Default: 100
conf_replication_max_json_value_size = [<integer>|<integer>[KB|MB|GB]]
* Controls the maximum size of a JSON string element at any nested
level while parsing a configuration change from JSON representation.
* If a knowledge object created on a member has some string element
that exceeds this limit, the knowledge object is not replicated
to the rest of the search head cluster, and a warning that mentions
conf_replication_max_json_value_size is written to splunkd.log.
* If you do not specify a unit for the value, the unit defaults to bytes.
* The lower limit of this setting is 512KB.
* When increasing this setting beyond the default, you must take into
account the available system memory.
* Default: 15MB
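For example, to raise the limit to a hypothetical 30MB (keeping available system memory in mind):
conf_replication_max_json_value_size = 30MB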
conf_replication_include.<conf_file_name> = <boolean>
* Controls whether Splunk replicates changes to a particular type of
*.conf file, along with any associated permissions in *.meta files.
* Default: false
conf_replication_summary.whitelist.<name> = <whitelist_pattern>
* Whitelist files to be included in configuration replication summaries.
conf_replication_summary.blacklist.<name> = <blacklist_pattern>
* Blacklist files to be excluded from configuration replication summaries.
conf_replication_summary.concerning_file_size = <integer>
* Any individual file within a configuration replication summary that is
larger than this value (in MB) triggers a splunkd.log warning message.
* Default: 50
conf_replication_summary.period = <timespan>
* Controls how often configuration replication summaries are created.
* Default: 1m (1 minute)
conf_replication_purge.eligibile_count = <integer>
* Controls how many configuration changes must be present before any
become eligible for purging.
* In other words: controls the minimum number of configuration changes
Splunk software remembers for replication purposes.
* Default: 20000
conf_replication_purge.eligibile_age = <timespan>
* Controls how old a configuration change must be before it is eligible
for purging.
* Default: 1d (1 day)
conf_replication_purge.period = <timespan>
* Controls how often configuration changes are purged.
* Default: 1h (1 hour)
conf_replication_find_baseline.use_bloomfilter_only = <boolean>
* Controls whether or not a search head cluster only uses bloom filters
to determine a baseline, when it replicates configurations.
* Set to true to only use bloom filters in baseline determination during
configuration replication.
* Set to false to first attempt a standard method, where the search head
cluster captain interacts with members to determine the baseline, before
falling back to using bloom filters.
* Default: false
conf_deploy_repository = <path>
* Full path to the directory containing configurations to deploy to
cluster members.
conf_deploy_staging = <path>
* Full path to the directory where preprocessed configurations may be
written before being deployed to cluster members.
conf_deploy_concerning_file_size = <integer>
* Any individual file within <conf_deploy_repository> that is larger than
this value (in MB) triggers a splunkd.log warning message.
* Default: 50
conf_deploy_fetch_url = <URL>
* Specifies the location of the deployer from which members fetch the
configuration bundle.
* This value must be set to a <URL> in order for the configuration bundle
to be fetched.
* No default.
conf_deploy_fetch_mode = auto|replace|none
* Controls configuration bundle fetching behavior when the member starts
up.
* When set to "replace", a member checks for a new configuration bundle
on every startup.
* When set to "none", a member does not fetch the configuration bundle on
startup.
* Regarding "auto":
* If no configuration bundle has yet been fetched, "auto" is equivalent
to "replace".
* If the configuration bundle has already been fetched, "auto" is
equivalent to "none".
* Default: replace
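For illustration, a sketch of the deployer-fetch settings on a member (the deployer hostname is hypothetical):
conf_deploy_fetch_url = https://deployer.example.com:8089
conf_deploy_fetch_mode = auto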
encrypt_fields = <field> ...
* These are the fields that need to be re-encrypted when a search head
cluster does its own first-time run on syncing all members with a new
splunk.secret key.
* Specify the fields as a comma-separated list of triple elements:
<conf-file>:<stanza-prefix>:<key elem>
* To match all stanzas from a conf file, leave the stanza-prefix
empty. For example: "server: :pass4SymmKey" matches all stanzas
with pass4SymmKey as key in server.conf.
* Default: storage/passwords, secret key for clustering/shclustering,
server ssl config
enable_jobs_data_lite = <boolean>
* This setting reduces memory usage on the captain for search head
clustering. It leads to lower memory use on the captain, as the members
send the artifact status.csv as a string.
* Default: false
shcluster_label = <string>
* This specifies the label of the search head cluster.
retry_autosummarize_or_data_model_acceleration_jobs = <boolean>
* Controls whether the captain tries a second time to delegate an
auto-summarized or data model acceleration job, if the first attempt to
delegate the job fails.
* Default: true
[replication_port://<port>]
############################################################################
# Configures the member to listen on a given TCP port for replicated
# data from another cluster member.
# At least one replication_port must be configured and not disabled.
############################################################################
disabled = <boolean>
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept connections.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A Classless Inter-Domain Routing (CIDR) block of addresses
(examples: "10/8", "192.168.1/24", "fe80:1234/32")
3. A DNS name, possibly with a "*" used as a wildcard
(examples: "myhost.example.com", "*.splunk.com")
4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the
connection. The input applies rules in order, and uses the first one that
matches. For example, "!10.1/16, *" allows connections from everywhere
except the 10.1.*.* network.
* Default: "*" (accept from anywhere)
[replication_port-ssl://<port>]
* This configuration is the same as the replication_port stanza, but
uses SSL.
disabled = true|false
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
serverCert = <path>
* Full path to file containing private key and server certificate.
* The <path> must refer to a PEM format file.
* No default.
sslPassword = <password>
* Server certificate password, if any.
* No default.
password = <password>
* DEPRECATED; use 'sslPassword' instead.
* Used only if 'sslPassword' is not set.
rootCA = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Used only if '[sslConfig]/sslRootCAPath' is not set.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more
root CA certificates concatenated together.
* No default.
cipherSuite = <cipher suite string>
* If set, uses the specified cipher string for the SSL connection.
* If not set, uses the default cipher string provided by OpenSSL. This is
used to ensure that the server does not accept connections using weak
encryption protocols.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is now always disabled. The exact set of SSL versions
allowed is now configurable via the "sslVersions" setting above.
useSSLCompression = <boolean>
* If true, enables SSL compression.
* Default: true
compressed = <boolean>
* DEPRECATED; use 'useSSLCompression' instead.
* Used only if 'useSSLCompression' is not set.
requireClientCert = <boolean>
* Requires that any peer that connects to the replication port has a
certificate that can be validated by the certificate authority specified
in 'rootCA'.
* Default: false
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the
connection settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
KV Store configuration
[kvstore]
disabled = <boolean>
* Set to true to disable the KV Store process on the current server. To
completely disable KV Store in a deployment with search head clustering
or search head pooling, you must also disable KV Store on each
individual server.
* Default: false
port = <port>
* Port to connect to the KV Store server.
* Default: 8191
replicaset = <replset>
* Replicaset name.
* Default: splunkrs
distributedLookupTimeout = <seconds>
* This setting has been removed, as it is no longer needed.
shutdownTimeout = <integer>
* Time, in seconds, to wait for a clean shutdown of the KV Store. If this
time is reached after signaling for a shutdown, KV Store is forcibly
terminated.
* Default: 100
initAttempts = <integer>
* The maximum number of attempts to initialize the KV Store when starting
splunkd.
* Default: 300
replication_host = <host>
* The host name to access the KV Store.
* This setting has no effect on a single Splunk instance.
* When using search head clustering, if the "replication_host" value is
not set in the [kvstore] stanza, the host you specify for "mgmt_uri" in
the [shclustering] stanza is used for KV Store connection strings and
replication.
* In search head pooling, this host value is a requirement for using KV
Store.
* This is the address on which the KV Store is available for accepting
data remotely.
verbose = <boolean>
* Set to true to enable verbose logging.
* Default: false
dbPath = <path>
* Path where KV Store data is stored.
* Changing this directory after initial startup does not move existing
data. The contents of the directory should be manually moved to the new
location.
* Default: $SPLUNK_DB/kvstore
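For illustration, a minimal [kvstore] stanza with a non-default data path (the path is hypothetical; remember that existing data must be moved manually):
[kvstore]
port = 8191
dbPath = /data/splunk/kvstore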
oplogSize = <integer>
* The size of the replication operation log, in MB, for environments with
search head clustering or search head pooling. In a standalone
environment, 20% of this size is used.
* After the KV Store has created the oplog for the first time, changing
this setting does NOT affect the size of the oplog. A full backup and
restart of the KV Store is required.
* Do not change this setting without first consulting with Splunk
Support.
* Default: 1000MB (1GB)
replicationWriteTimeout = <integer>
* The time to wait, in seconds, for replication to complete while saving
KV Store operations. When the value is 0, the process never times out.
* Used for replication environments (search head clustering or search
head pooling).
* Default: 1800 (30 minutes)
caCertFile = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Used only if 'sslRootCAPath' is not set.
* Full path to a CA (Certificate Authority) certificate(s) PEM format
file.
* If specified, it is used in KV Store SSL connections and
authentication.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Default: $SPLUNK_HOME/etc/auth/cacert.pem
caCertPath = <filepath>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
serverCert = <filepath>
* A certificate file signed by the signing authority specified above by
caCertPath.
* In search head clustering or search head pooling, the certificates at
different members must share the same 'subject'.
* The Distinguished Name (DN) found in the certificate's subject must
specify a non-empty value for at least one of the following attributes:
Organization (O), the Organizational Unit (OU) or the
Domain Component (DC).
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
sslKeysPath = <filepath>
* DEPRECATED; use 'serverCert' instead.
* Used only when 'serverCert' is empty.
sslPassword = <password>
* Password of the private key in the file specified by 'serverCert'
above.
* Must be specified if FIPS is enabled (i.e. SPLUNK_FIPS=1); otherwise,
KV Store is not available.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* No default.
sslKeysPassword = <password>
* DEPRECATED; use 'sslPassword' instead.
* Used only when 'sslPassword' is empty.
sslCRLPath = <filepath>
* Certificate Revocation List file.
* Optional. Defaults to no Revocation List.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
modificationsReadIntervalMillisec = <integer>
* Specifies how often, in milliseconds, to check for modifications to
KV Store collections in order to replicate changes for distributed
searches.
* Default: 1000 (1 second)
modificationsMaxReadSec = <integer>
* Maximum time interval, in seconds, that KV Store can spend while
checking for modifications before it produces collection dumps for
distributed searches.
* Default: 30
[indexer_discovery]
pass4SymmKey = <password>
* Security key shared between master node and forwarders.
* If specified here, the same value must also be specified on all
forwarders connecting to this master.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
polling_rate = <integer>
* A value between 1 and 10. This value affects the forwarder polling
frequency to achieve the desired polling rate. The number of connected
forwarders is also taken into consideration.
* The formula used to determine the effective polling interval,
in milliseconds, is:
(number_of_forwarders/polling_rate + 30 seconds) * 1000
* Default: 10
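As a worked example of the formula above: with 100 connected forwarders and the default polling_rate of 10, the effective polling interval is (100/10 + 30) * 1000 = 40000 milliseconds, or 40 seconds.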
indexerWeightByDiskCapacity = <boolean>
* If set to true, it instructs the forwarders to use weighted load
balancing. In weighted load balancing, load balancing is based on the
total disk capacity of the target indexers, with the forwarder streaming
more data to indexers with larger disks.
* The traffic sent to each indexer is based on the ratio of:
indexer_disk_capacity/total_disk_capacity_of_indexers_combined
* Default: false
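As a worked example of the ratio above: an indexer with 400GB of disk capacity in a set of indexers totaling 2000GB receives 400/2000, or 20%, of the forwarded traffic.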
[node_auth]
signatureVersion = <comma-separated list>
* A list of authentication protocol versions that nodes of a Splunk
deployment use to authenticate to other nodes.
* Each version of the node authentication protocol implements an
algorithm that specifies cryptographic parameters to generate
authentication data.
* Nodes may only communicate using the same authentication protocol
version.
* For example, if you set "signatureVersion = v1,v2" on one node, that
node sends and accepts authentication data using versions "v1" and "v2"
of the protocol, and you must also set "signatureVersion" to one of
"v1", "v2", or "v1,v2" on other nodes for those nodes to mutually
authenticate.
* For higher levels of security, set 'signatureVersion' to "v2".
* Default: v1,v2
Cache Manager Configuration
[cachemanager]
max_concurrent_downloads = <unsigned integer>
* The maximum number of buckets that can be downloaded simultaneously
from external storage.
* Default: 8
eviction_policy = <string>
* The name of the eviction policy to use.
* Current options: lru, clock, random, lrlt, noevict
* Do not change the value from the default unless instructed by
Splunk Support.
* Default: lru
enable_eviction_priorities = <boolean>
* When requesting buckets, search peers can give hints to the cache
manager about the relative importance of buckets.
* When enabled, the cache manager takes the hints into consideration;
when disabled, hints are ignored.
* Default: true
hotlist_recency_secs = <unsigned integer>
* The cache manager attempts to defer bucket eviction until the interval
between the bucket's latest time and the current time exceeds this
setting, in seconds.
* This setting can be overridden on a per-index basis in indexes.conf.
* Default: 86400 (24 hours)
[raft_statemachine]
disabled = <boolean>
* Set to true to disable the raft statemachine.
* This feature requires search head clustering to be enabled.
* Any consensus replication among search heads uses this feature.
* Default: true
replicate_search_peers = <boolean>
* When this value is set to true, add/remove search-server requests are
applied to all members of a search head cluster.
* Requires a healthy search head cluster with a captain.
[watchdog]
disabled = true|false
* Disables thread monitoring functionality.
* Any thread that has been blocked for more than 'responseTimeout'
seconds is logged to $SPLUNK_HOME/var/log/watchdog/watchdog.log.
* Defaults to false.
responseTimeout = <decimal>
* Maximum time, in seconds, that a thread can take to respond before the
watchdog logs a 'thread blocked' incident.
* The minimum value for 'responseTimeout' is 0.1.
* If you set 'responseTimeout' to lower than 0.1, the setting uses the
minimum value instead.
* Defaults to 8 seconds.
actions = <actions_list>
* A comma-separated list of actions that execute sequentially when a
blocked thread is encountered.
* Currently, the only available actions are 'pstacks', 'script' and
'bulletin'.
* 'pstacks' enables call stack generation for a blocked thread.
* Call stack generation gives the user immediate information on the
potential bottleneck or deadlock.
* The watchdog saves each call stack in a separate file in
$SPLUNK_HOME/var/log/watchdog with the following file name format:
wd_stack_<pid>_<thread_name>_%Y_%m_%d_%H_%M_%S.%f_<uid>.log.
* 'script' executes specified script.
* 'bulletin' shows a message on the web interface.
* NOTE: This setting should be used only during troubleshooting, and if
you have been asked to set it by a Splunk Support engineer. It might
degrade performance by increasing CPU and disk usage.
* Defaults to empty list (no action executed).
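For illustration, a sketch of a [watchdog] stanza that enables call stack generation (the values are illustrative; use such settings only under guidance from Splunk Support):
[watchdog]
disabled = false
responseTimeout = 8
actions = pstacks,bulletin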
actionsInterval = <decimal>
* The timeout, in seconds, that the watchdog uses while tracing a blocked
thread. The watchdog executes each action every 'actionsInterval'
seconds.
* The minimum value for 'actionsInterval' is 0.01.
* If you set 'actionsInterval' to lower than 0.01, the setting uses the
minimum value instead.
* NOTE: A very small timeout may impact performance by increasing CPU
usage. Splunk software may also be slowed down by frequently executed
actions.
* Defaults to 0.7 second.
pstacksEndpoint = <boolean>
* Enables pstacks endpoint at /services/server/pstacks
* Endpoint allows ad-hoc pstacks generation of all running threads.
* NOTE: This setting is ignored if 'watchdog' is not enabled.
* NOTE: This setting should be used only during troubleshooting and only
if you have been explicitly asked to set it by a Splunk Support engineer.
* Defaults to true.
[watchdog:timeouts]
reaperThread = <decimal>
* Maximum time, in seconds, that a reaper thread can take to respond
before the watchdog logs a 'thread blocked' incident.
* The minimum value for 'reaperThread' is 0.1.
* If you set 'reaperThread' to lower than 0.1, the setting uses the
minimum value instead.
* This value is used only for threads dedicated to cleaning up dispatch
directories and search artifacts.
* Defaults to 30 seconds.
[watchdogaction:pstacks]
dumpAllThreads = <boolean>
* Determines whether or not the watchdog saves stacks of all monitored
threads when it encounters a blocked thread.
* If you set 'dumpAllThreads' to true, the watchdog generates call stacks
for all threads, regardless of thread state.
* NOTE: This setting is ignored if 'pstacks' is not enabled in the
'actions' list.
* NOTE: This setting should be used only during troubleshooting, and if
you have been asked to set it by a Splunk Support engineer. It may impact
performance by increasing CPU and disk usage.
* Defaults to false.
maxStacksPerBlock = <integer>
* Maximum number of stacks that the watchdog generates for a blocked
thread. If 'dumpAllThreads' is set to true, the watchdog generates
stacks for all threads.
* If the blocked thread starts responding again, the count of stacks that
the watchdog has generated resets to zero.
* If another thread blockage occurs, the watchdog begins generating
stacks again, up to 'maxStacksPerBlock' stacks.
* When set to 0, an unlimited number of stacks will be generated.
* NOTE: This setting is ignored if 'pstacks' is not enabled in the
'actions' list.
* Defaults to 100.
[watchdogaction:script]
path = <string>
* The path to the script to execute when the watchdog triggers the
action.
* No default. If you do not set 'path', the watchdog ignores the action.
useShell = <boolean>
* If set to true, the script runs from the OS shell
("/bin/sh -c" on UNIX, "cmd.exe /c" on Windows).
* If set to false, the program is run directly, without attempting to
expand shell metacharacters.
* Defaults to false.
forceStop = <boolean>
* Whether or not the watchdog forcefully stops an active watchdog action
script when a blocked thread starts to respond.
* Use this setting when, for example, the watchdog script has internal
logic that controls its lifetime and must run without interruption.
* Defaults to false.
forceStopOnShutdown = <boolean>
* If you set this setting to "true", the watchdog forcefully stops active
watchdog scripts upon receipt of a shutdown request.
* Defaults to true.
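For illustration, a sketch of a script action tied to the watchdog (the script path is hypothetical):
[watchdogaction:script]
path = /opt/splunk/bin/collect_diagnostics.sh
useShell = false
forceStop = false
forceStopOnShutdown = true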
[parallelreduce]
pass4SymmKey = <password>
* Security key shared between reducers and regular indexers.
* The same value must also be specified on all intermediaries.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
[rendezvous_service]
uri = <uri>
* Points to the tenant rendezvous service.
* If empty or unspecified, disables rendezvous service heartbeats.
* Currently, only HTTP is supported by the service.
* Optional.
* Example <uri>:
<scheme>://<hostname>:<port>/<tenantId>/<rendezvous_path>
refresh_interval = <positive integer>
* Frequency, in seconds, at which the rendezvous service is updated.
* Optional.
* Default: 30
[bucket_catalog_service]
uri = <uri>
* Points to the tenant bucket catalog service.
* Required.
* Currently, only HTTP is supported by the service.
* Example:
<scheme>://<hostname>:<port>/<tenantId>/<bucket_catalog_path>
token = <token>
[search_artifact_remote_storage]
disabled = <boolean>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies whether or not search artifacts should be stored remotely.
* Splunkd does not clean up artifacts from remote storage. Set up cleanup
separately with the remote storage provider.
* Default: true
S3 specific settings
remote.s3.header.<http-method-name>.<header-field-name> = <String>
* Optional.
* Enable server-specific features, such as reduced redundancy,
encryption, and so on, by passing extra HTTP headers with the REST
requests.
* The <http-method-name> can be any valid HTTP method. For example, GET,
PUT, or ALL, for setting the header field for all HTTP methods.
* Example: remote.s3.header.PUT.x-amz-storage-class =
REDUCED_REDUNDANCY
remote.s3.access_key = <String>
* Optional.
* Specifies the access key to use when authenticating with the remote
storage system supporting the S3 API.
* If not specified, the indexer looks for these environment variables:
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order).
* If the environment variables are not set and the indexer is running on
EC2, the indexer attempts to use the access key from the IAM role.
* No default.
remote.s3.secret_key = <String>
* Optional.
* Specifies the secret key to use when authenticating with the remote
storage system supporting the S3 API.
* If not specified, the indexer looks for these environment variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order).
* If the environment variables are not set and the indexer is running on
EC2, the indexer attempts to use the secret key from the IAM role.
* No default.
remote.s3.list_objects_version = v1|v2
* The AWS S3 Get Bucket (List Objects) Version to use.
* See AWS S3 documentation "GET Bucket (List Objects) Version 2" for
details.
* Default: v1
remote.s3.signature_version = v2|v4
* Optional.
* The signature version to use when authenticating with the remote
storage system supporting the S3 API.
* For 'sse-kms' server-side encryption scheme, you must use
signature_version=v4.
* Default: v4
remote.s3.auth_region = <String>
* Optional
* The authentication region to use for signing requests when interacting
with the remote storage system supporting the S3 API.
* Used with v4 signatures only.
* If unset and the endpoint (either automatically constructed or
explicitly set with the remote.s3.endpoint setting) uses an AWS URL (for
example, https://s3-us-west-1.amazonaws.com), the instance attempts to
extract the value from the endpoint URL (for example, "us-west-1"). See
the description for the remote.s3.endpoint setting.
* If unset and an authentication region cannot be determined, the request
is signed with an empty region value.
* No default.
remote.s3.supports_versioning = true | false
* Optional.
* Specifies whether the remote storage supports versioning.
* Versioning is a means of keeping multiple variants of an object
in the same bucket on the remote storage.
* Default: true
remote.s3.endpoint = <URL>
* Optional.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL
connectivity with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint is
constructed automatically based on the EC2 region of the instance where
the indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
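For illustration, a sketch of the core S3 connectivity settings discussed above (the endpoint and region are illustrative AWS values):
remote.s3.endpoint = https://s3-us-west-2.amazonaws.com
remote.s3.signature_version = v4
remote.s3.auth_region = us-west-2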
remote.s3.retry_policy = max_count
* Sets the retry policy to use for remote file operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that
fail for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a file operation is
retried upon intermittent failure, both for individual parts of a
multipart download or upload and for files as a whole.
* Default: max_count
remote.s3.sslVerifyServerCert = <boolean>
* Optional.
* If this is set to true, Splunk verifies the certificate presented by
the S3 server and checks that the common name/alternate name matches the
ones specified in 'remote.s3.sslCommonNameToCheck'
and 'remote.s3.sslAltNameToCheck'.
* Default: false
remote.s3.sslVersions = <versions_list>
* Optional.
* Comma-separated list of SSL versions to connect to
'remote.s3.endpoint'.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version
"tls" selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: tls1.2
remote.s3.sslRootCAPath = <path>
* Optional
* Full path to the Certificate Authority (CA) certificate PEM format
file, containing one or more certificates concatenated together. The S3
certificate is validated against the CAs present in this file.
* Default: [sslConfig/caCertFile] in the server.conf file
remote.s3.ecdhCurves = <comma-separated list of EC curves>
* Optional.
* The curves should be specified in the order of preference.
* The client sends these curves as a part of Client Hello.
* We only support named curves specified by their SHORT names.
(see struct ASN1_OBJECT in asn1.h)
* The list of valid named curves by their short/long names can be
obtained by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* e.g. ecdhCurves = prime256v1,secp384r1,secp521r1
* Default: not set
remote.s3.dhFile = <path>
* Optional
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
remote.s3.encryption.sse-c.key_type = kms
* Optional
* Determines the mechanism Splunk uses to generate the key for sending
over to S3 for SSE-C.
* The only valid value is 'kms', indicating the AWS KMS service.
* You must specify the required KMS settings, e.g. 'remote.s3.kms.key_id',
for Splunk to start up while using SSE-C.
* Default: kms.
remote.s3.kms.key_id = <string>
* Required if remote.s3.encryption = sse-c | sse-kms
* Specifies the identifier for the Customer Master Key (CMK) on KMS. It
can be the unique key ID or the Amazon Resource Name (ARN) of the CMK,
or the alias name or ARN of an alias that refers to the CMK.
* Examples:
Unique key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
CMK ARN:
arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
* No default.
remote.s3.kms.access_key = <string>
* Optional.
* Similar to 'remote.s3.access_key'.
* If not specified, KMS access uses 'remote.s3.access_key'.
* No default.
remote.s3.kms.secret_key = <string>
* Optional.
* Similar to 'remote.s3.secret_key'.
* If not specified, KMS access uses 'remote.s3.secret_key'.
* No default.
remote.s3.kms.auth_region = <string>
* Required if 'remote.s3.auth_region' is not set and Splunk cannot
automatically extract this information.
* Similar to 'remote.s3.auth_region'.
* If not specified, KMS access uses 'remote.s3.auth_region'.
* No default.
remote.s3.kms.<ssl_settings> = <...>
* Optional.
* Check the descriptions of the SSL settings for
'remote.s3.<ssl_settings>' above, e.g. remote.s3.sslVerifyServerCert.
* Valid ssl_settings are sslVerifyServerCert, sslVersions, sslRootCAPath,
sslAltNameToCheck, sslCommonNameToCheck, cipherSuite, ecdhCurves, and
dhFile.
* All of these are optional and fall back to the same defaults as
the 'remote.s3.<ssl_settings>'.
server.conf.example
# Version 7.2.1
#
# This file contains an example server.conf. Use this file to configure
# SSL and HTTP server options.
#
# To use one or more of these configurations, copy the configuration
# block into server.conf in $SPLUNK_HOME/etc/system/local/. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Turn on SSL:
[sslConfig]
enableSplunkdSSL = true
useClientSSLCompression = true
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
certCreateScript = genMyServerCert.sh
[proxyConfig]
http_proxy = http://proxy:80
https_proxy = http://proxy:80
no_proxy = localhost, 127.0.0.1, ::1
trustedIP = 127.0.0.1
############################################################################
# Set this node to be a cluster master.
############################################################################
[clustering]
mode = master
replication_factor = 3
pass4SymmKey = someSecret
search_factor = 2
############################################################################
# Set this node to be a slave to cluster master "SplunkMaster01" on
port
# 8089.
############################################################################
[clustering]
mode = slave
master_uri = https://SplunkMaster01.example.com:8089
pass4SymmKey = someSecret
############################################################################
# Set this node to be a searchhead to cluster master "SplunkMaster01" on
# port 8089.
############################################################################
[clustering]
mode = searchhead
master_uri = https://SplunkMaster01.example.com:8089
pass4SymmKey = someSecret
############################################################################
# Set this node to be a searchhead to multiple cluster masters -
# "SplunkMaster01" with pass4SymmKey set to 'someSecret and
"SplunkMaster02"
# with no pass4SymmKey set here.
############################################################################
[clustering]
mode = searchhead
master_uri = clustermaster:east, clustermaster:west
[clustermaster:east]
master_uri=https://SplunkMaster01.example.com:8089
pass4SymmKey=someSecret
[clustermaster:west]
master_uri=https://SplunkMaster02.example.com:8089
############################################################################
# Open an additional non-SSL HTTP REST port, bound to the localhost
# interface (and therefore not accessible from outside the machine). Local
# REST clients like the CLI can use this to avoid SSL overhead when not
# sending data across the network.
############################################################################
[httpServerListener:127.0.0.1:8090]
ssl = false
serverclass.conf
The following are the spec and example files for serverclass.conf.
serverclass.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values for defining server
# classes to which deployment clients can belong. These attributes and
# values specify what content a given server class member will receive
from
# the deployment server.
#
# For examples, see serverclass.conf.example. You must reload
deployment
# server ("splunk reload deploy-server"), or restart splunkd, for
changes to
# this file to take effect.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure the server classes that are used by a deployment server
instance.
#
# Server classes are essentially categories. They use filters to control
# what clients they apply to, contain a set of applications, and may define
# deployment server behavior for the management of those applications. The
# filters can be based on DNS name, IP address, build number of client
# machines, platform, and the so-called clientName. If a target machine
# matches the filter, then the apps and configuration content that make up
# the server class will be deployed to it.
# Property Inheritance
#
# Stanzas in serverclass.conf go from general to more specific, in the
# following order:
# [global] -> [serverClass:<name>] ->
[serverClass:<scname>:app:<appname>]
#
# Some properties defined at a general level (say [global]) can be
# overridden by a more specific stanza as it applies to them. All
# overridable properties are marked as such.
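For example (a sketch using the restartSplunkd key described below), a
value set in [global] can be overridden for a single server class:
  [global]
  restartSplunkd = false
  [serverClass:MyApps]
  restartSplunkd = true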
disabled = true|false
* Toggles deployment server component off and on.
* Set to true to disable.
* Defaults to false.
crossServerChecksum = true|false
* Ensures that each app will have the same checksum across different
deployment
servers.
* Useful if you have multiple deployment servers behind a load-balancer.
* Defaults to false.
excludeFromUpdate = <path>[,<path>]...
* Specifies paths to one or more top-level files or directories (and their
  contents) to exclude from being touched during app update. Note that each
  comma-separated entry MUST be prefixed by "$app_root$/" (otherwise a
  warning will be generated).
* Can be overridden at the serverClass level.
* Can be overridden at the app level.
* Requires version 6.2.x or higher for both the Deployment Server and
  Client.
repositoryLocation = <path>
* The repository of applications on the server machine.
* Can be overridden at the serverClass level.
* Defaults to $SPLUNK_HOME/etc/deployment-apps
targetRepositoryLocation = <path>
* The location on the deployment client where to install the apps
defined
for this Deployment Server.
* If this value is unset, or set to empty, the repositoryLocation path
is used.
* Useful only with complex (for example, tiered) deployment strategies.
* Defaults to $SPLUNK_HOME/etc/apps, the live
configuration directory for a Splunk instance.
tmpFolder = <path>
* Working folder used by deployment server.
* Defaults to $SPLUNK_HOME/var/run/tmp
filterType = whitelist | blacklist
* The whitelist setting indicates a filtering strategy that pulls in a
  subset:
  * Items are not considered to match the stanza by default.
  * Items that match any whitelist entry, and do not match any blacklist
    entry, are considered to match the stanza.
  * Items that match any blacklist entry are not considered to match the
    stanza, regardless of whitelist.
* The blacklist setting indicates a filtering strategy that rules out a
  subset:
  * Items are considered to match the stanza by default.
  * Items that match any blacklist entry, and do not match any whitelist
    entry, are considered to not match the stanza.
  * Items that match any whitelist entry are considered to match the
    stanza.
* More briefly:
  * whitelist: default no-match -> whitelists enable -> blacklists disable
  * blacklist: default match -> blacklists disable -> whitelists enable
* Can be overridden at the serverClass level, and the serverClass:app
  level.
* Defaults to whitelist
whitelist.<n> = <clientName> | <IP address> | <hostname>
blacklist.<n> = <clientName> | <IP address> | <hostname>
* Patterns are PCRE regular expressions, with the following aids for
  easier entry:
* You can specify simply '.' to mean '\.'
* You can specify simply '*' to mean '.*'
* Matches are always case-insensitive; you do not need to specify the
'(?i)' prefix.
whitelist.from_pathname = <pathname>
blacklist.from_pathname = <pathname>
* As an alternative to a series of (whitelist|blacklist).<n>, the
  <clientName>, <IP address>, and <hostname> list can be imported from
  <pathname> that is either a plain text file or a comma-separated values
  (CSV) file.
* May be used in conjunction with (whitelist|blacklist).select_field,
  (whitelist|blacklist).where_field, and (whitelist|blacklist).where_equals.
* If used by itself, then <pathname> specifies a plain text file where one
  <clientName>, <IP address>, or <hostname> is given per line.
* If used in conjunction with select_field, where_field, and where_equals,
  then <pathname> specifies a CSV file.
* The <pathname> is relative to $SPLUNK_HOME.
* May also be used in conjunction with (whitelist|blacklist).<n> to specify
  additional values, but there is no direct relation between them.
* At most one from_pathname may be given per stanza.
whitelist.where_field = <field name>
blacklist.where_field = <field name>
* Specifies the field in the CSV file whose values are tested against
  (whitelist|blacklist).where_equals.
* At most one where_field may be given per stanza.
* Can be overridden at the serverClass level, and the serverClass:app
  level.
* Defaults to false
precompressBundles = true | false
* Controls whether the deployment server generates both .bundle and
  .bundle.gz files. Pre-compressed bundles improve performance because the
  deployment server does not have to compress each bundle on the fly for
  every client, but they are only beneficial when SSL compression is not in
  use and the client supports HTTP compression:
  * Deployment Server / server.conf
    * allowSslCompression = false
    * useHTTPServerCompression = true
  * Deployment Client / server.conf
    * useHTTPClientCompression = true
* This option is inherited and available up to the serverclass level (not
  app). Apps belonging to server classes that require precompression will
  be compressed, even if they also belong to a server class which does not
  require precompression.
* Defaults to true
[serverClass:<serverClassName>]
* This stanza defines a server class. A server class is a collection of
applications; an application may belong to multiple server classes.
* serverClassName is a unique name that is assigned to this server
class.
* A server class can override all inheritable properties in the [global]
stanza.
* A server class name may only contain: letters, numbers, space,
underscore,
dash, dot, tilde, and the '@' symbol. It is case-sensitive.
# NOTE:
# The keys listed below are all described in detail in the
# [global] section above. They can be used with serverClass stanza to
# override the global setting
continueMatching = true | false
endpoint = <URL template string>
excludeFromUpdate = <path>[,<path>]...
filterType = whitelist | blacklist
whitelist.<n> = <clientName> | <IP address> | <hostname>
blacklist.<n> = <clientName> | <IP address> | <hostname>
machineTypesFilter = <comma-separated list>
restartSplunkWeb = true | false
restartSplunkd = true | false
issueReload = true | false
restartIfNeeded = true | false
stateOnClient = enabled | disabled | noop
repositoryLocation = <path>
########### THIRD LEVEL: app ###########
appFile=<file name>
* In cases where the app name is different from the file or directory
  name, you can use this parameter to specify the file name. Supported
  formats are: directories, .tar files, and .tgz files.
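For example (a sketch; the server class, app, and file names are
hypothetical):
  [serverClass:MyApps:app:myapp]
  appFile = myapp-1.0.tgz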
serverclass.conf.example
# Version 7.2.1
#
# Example 1
# Matches all clients and includes all apps in the server class
[global]
whitelist.0=*
# whitelist matches all clients.
[serverClass:AllApps]
[serverClass:AllApps:app:*]
# a server class that encapsulates all apps in the repositoryLocation
# Example 2
# Assign server classes based on dns names.
[global]
[serverClass:AppsForOps]
whitelist.0=*.ops.yourcompany.com
[serverClass:AppsForOps:app:unix]
[serverClass:AppsForOps:app:SplunkLightForwarder]
[serverClass:AppsForDesktops]
filterType=blacklist
# blacklist everybody except the Windows desktop machines.
blacklist.0=*
whitelist.0=*.desktops.yourcompany.com
[serverClass:AppsForDesktops:app:SplunkDesktop]
# Example 3
# Deploy server class based on machine types
[global]
[serverClass:AppsByMachineType]
# Ensure this server class is matched by all clients. It is IMPORTANT
to
# have a general filter here, and a more specific filter at the app
level.
# An app is matched _only_ if the server class it is contained in was
# successfully matched!
whitelist.0=*
[serverClass:AppsByMachineType:app:SplunkDesktop]
# Deploy this app only to Windows boxes.
machineTypesFilter=windows-*
[serverClass:AppsByMachineType:app:unix]
# Deploy this app only to unix boxes - 32/64 bit.
machineTypesFilter=linux-i686, linux-x86_64
# Example 4
# Specify app update exclusion list.
[global]
# The local/ subdirectory within every app will not be touched upon
update.
excludeFromUpdate=$app_root$/local
[serverClass:MyApps]
[serverClass:MyApps:app:SpecialCaseApp]
# For the SpecialCaseApp, both the local/ and lookups/ subdirectories
will
# not be touched upon update.
excludeFromUpdate=$app_root$/local,$app_root$/lookups
# Example 5
# Control client reloads/restarts
[global]
restartSplunkd=false
restartSplunkWeb=true
# Example 6a
# Use (whitelist|blacklist) text file import.
[serverClass:MyApps]
whitelist.from_pathname = etc/system/local/clients.txt
# Example 6b
# Use (whitelist|blacklist) CSV file import to read all values from the
Client
# field (ignoring all other fields).
[serverClass:MyApps]
whitelist.select_field = Client
whitelist.from_pathname = etc/system/local/clients.csv
# Example 6c
# Use (whitelist|blacklist) CSV file import to read some values from
the Client
# field (ignoring all other fields) where ServerType is one of T1, T2,
or
# starts with dc.
[serverClass:MyApps]
whitelist.select_field = Client
whitelist.from_pathname = etc/system/local/server_list.csv
whitelist.where_field = ServerType
whitelist.where_equals = T1, T2, dc*
# Example 6d
# Use (whitelist|blacklist) CSV file import to read some values from
field 2
# (ignoring all other fields) where field 1 is one of T1, T2, or starts
with
# dc.
[serverClass:MyApps]
whitelist.select_field = 2
whitelist.from_pathname = etc/system/local/server_list.csv
whitelist.where_field = 1
whitelist.where_equals = T1, T2, dc*
serverclass.seed.xml.conf
The following are the spec and example files for serverclass.seed.xml.conf.
serverclass.seed.xml.conf.spec
# Version 7.2.1
<!--
# This configuration is used by deploymentClient to seed a Splunk
installation with applications, at startup time.
# This file should be located in the workingDir folder defined by
deploymentclient.conf.
#
# An interesting fact - the DS -> DC communication on the wire also uses
this XML format.
-->
<?xml version="1.0"?>
<deployment name="somename">
<!--
# The endpoint from which all apps can be downloaded. This value
can be overridden by serviceClass or app declarations below.
# In addition, deploymentclient.conf can control how this property
is used by deploymentClient - see deploymentclient.conf.spec.
-->
<endpoint>$deploymentServerUri$/services/streams/deployment?name=$serviceClassName$:$appName$</endpoint>
<!--
# The location on the deploymentClient where all applications will
be installed. This value can be overridden by serviceClass or
# app declarations below.
# In addition, deploymentclient.conf can control how this property
is used by deploymentClient - see deploymentclient.conf.spec.
-->
<repositoryLocation>$SPLUNK_HOME/etc/apps</repositoryLocation>
<serviceClass name="serviceClassName">
<!--
# The order in which this service class is processed.
-->
<order>N</order>
<!--
# DeploymentClients can also override these values using
serverRepositoryLocationPolicy and serverEndpointPolicy.
-->
<repositoryLocation>$SPLUNK_HOME/etc/myapps</repositoryLocation>
<endpoint>splunk.com/spacecake/$serviceClassName$/$appName$.tgz</endpoint>
<!--
# Please See serverclass.conf.spec for how these properties are
used.
-->
<continueMatching>true</continueMatching>
<restartSplunkWeb>false</restartSplunkWeb>
<restartSplunkd>false</restartSplunkd>
<stateOnClient>enabled</stateOnClient>
<app name="appName1">
<!--
# Applications can override the endpoint property.
-->
<endpoint>splunk.com/spacecake/$appName$</endpoint>
</app>
<app name="appName2"/>
</serviceClass>
</deployment>
serverclass.seed.xml.conf.example
</app>
<app name="app_1">
<repositoryLocation>$SPLUNK_HOME/etc/myapps</repositoryLocation>
<!-- Download app_1 from the given location -->
<endpoint>splunk.com/spacecake/apps/app_1.tgz</endpoint>
</app>
</serverClass>
<serverClass name="foobar_apps">
<!-- construct url for each location based on the scheme below and
download each app -->
<endpoint>foobar.com:5556/services/streams/deployment?name=$serverClassName$_$appName$</endpoint>
<app name="app_0"/>
<app name="app_1"/>
<app name="app_2"/>
</serverClass>
<serverClass name="local_apps">
<endpoint>foo</endpoint>
<app name="app_0">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_0.bundle</endpoint>
</app>
<app name="app_1">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_1.bundle</endpoint>
</app>
<app name="app_2">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_2.bundle</endpoint>
</app>
</serverClass>
</deployment>
setup.xml.conf
The following are the spec and example files for setup.xml.conf.
setup.xml.conf.spec
# Version 7.2.1
#
#
<!--
This file describes the setup XML config and provides some examples.
setup.xml provides a Setup Screen that you provide to users to specify
configurations
for an app. The Setup Screen is available when the user first runs the
app or from the
Splunk Manager: Splunk > Manager > Apps > Actions > Set up
$SPLUNK_HOME/etc/apps/<app>/default/setup.xml
endpoint=saved/searches
entity=MySavedSearch
field=cron_schedule
(1) blocks provide an iteration concept when the referenced REST entity
is a regex
(4) blocks can be used to create a new entry rather than edit an already
existing one; set the entity name to "_new". NOTE: make sure to add the
required field 'name' as an input.
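For example, a sketch of a block that creates a new saved search (the
endpoint is taken from the examples above; titles and labels are
illustrative):
  <block title="Create a search" endpoint="saved/searches" entity="_new">
    <input field="name">
      <label>Search name</label>
      <type>text</type>
    </input>
    <input field="search">
      <label>Search string</label>
      <type>text</type>
    </input>
  </block>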
of entities/object the block/input addresses. Generally, an
endpoint maps to a
Splunk configuration file.
Nodes within an <input> element can display the name of the entity and
field values within the entity
on the setup screen. Specify $name$ to display the name of the entity.
Use $<field_name>$ to specify
the value of a specified field.
-->
<setup>
<block title="Basic stuff" endpoint="saved/searches/"
entity="foobar">
<text> some description here </text>
<input field="is_scheduled">
<label>Enable Schedule for $name$</label> <!-- this will be
rendered as "Enable Schedule for foobar" -->
<type>bool</type>
</input>
<input field="cron_scheduled">
<label>Cron Schedule</label>
<type>text</type>
</input>
<input field="actions">
<label>Select Active Actions</label>
<type>list</type>
</input>
<type>text</type>
</input>
<input target="search">
<label>Search</label>
<type>text</type>
</input>
</block>
</text>
<input entity="%24WINDIR%5CWindowsUpdate.log" field="enabled">
<label>Enable $name$</label>
<type>bool</type>
</input>
</block>
</setup>
setup.xml.conf.example
No example
source-classifier.conf
The following are the spec and example files for source-classifier.conf.
source-classifier.conf.spec
# Version 7.2.1
#
# This file contains all possible options for configuring settings for
the
# file classifier in source-classifier.conf.
#
# There is a source-classifier.conf in $SPLUNK_HOME/etc/system/default/
To
# set custom configurations, place a source-classifier.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# source-classifier.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
ignored_model_keywords = <space-separated list>
* To prevent sourcetype "bundles/learned/*-model.xml" files from containing
  sensitive terms (e.g. "bobslaptop") that occur very frequently in your
  data files, add those terms to ignored_model_keywords.
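For example (a sketch; the terms are illustrative):
  ignored_model_keywords = bobslaptop corpserver01 jsmith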
source-classifier.conf.example
# Version 7.2.1
#
# This file contains an example source-classifier.conf. Use this file
to
# configure classification
# of sources into sourcetypes.
#
# To use one or more of these configurations, copy the configuration
block
# into source-classifier.conf in $SPLUNK_HOME/etc/system/local/. You
must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
sourcetypes.conf
The following are the spec and example files for sourcetypes.conf.
sourcetypes.conf.spec
# Version 7.2.1
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the
document
# models used by the file classifier for creating source types.
GLOBAL SETTINGS
# * If a setting is defined at both the global level and in a specific
#   stanza, the value in the specific stanza takes precedence.
_sourcetype = <value>
* Specifies the sourcetype for the model.
* Change this to change the model's sourcetype.
* Future sources that match the model will receive a sourcetype of this
new
name.
_source = <value>
* Specifies the source (filename) for the model.
sourcetypes.conf.example
# Version 7.2.1
#
# This file contains an example sourcetypes.conf. Use this file to
configure
# sourcetype models.
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the
document
# models used by the file classifier for creating source types.
#
# Generally, you should not edit sourcetypes.conf, as most attributes
are
# machine generated. However, there are two attributes which you can
change.
#
# To use one or more of these configurations, copy the configuration
block into
# sourcetypes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This is an example of a machine-generated sourcetype model for a
# fictitious sourcetype cadcamlog.
#
[/Users/bob/logs/bnf.x5_Thu_Dec_13_15:59:06_2007_171714722]
_source = /Users/bob/logs/bnf.x5
_sourcetype = cadcamlog
L----------- = 0.096899
L-t<_EQ> = 0.016473
splunk-launch.conf
The following are the spec and example files for splunk-launch.conf.
splunk-launch.conf.spec
# Version 7.2.1
# Note: this conf file is different from most splunk conf files. There
is
# only one in the whole system, located at
# $SPLUNK_HOME/etc/splunk-launch.conf; further, there are no stanzas,
# explicit or implicit. Finally, any splunk-launch.conf files in
# etc/apps/... or etc/users/... will be ignored.
#*******
# Environment variables
#
# Primarily, this file simply sets environment variables to be used by
# Splunk programs.
#
# These environment variables are the same type of system environment
# variables that can be set, on unix, using:
# bourne shells:
# $ export ENV_VAR=value
# c-shells:
# % setenv ENV_VAR value
#
# or at a windows command prompt:
# C:\> SET ENV_VAR=value
#*******
<environment_variable>=<value>
#*******
# Specific Splunk environment settings
#
# These settings are primarily treated as environment variables, though
some
# have some additional logic (defaulting).
#
# There is no need to explicitly set any of these values in typical
# environments.
#*******
SPLUNK_HOME=<pathname>
* The comment in the auto-generated splunk-launch.conf is
informational, not
a live setting, and does not need to be uncommented.
* Fully qualified path to the Splunk install directory.
* If unset, Splunk automatically determines the location of SPLUNK_HOME
based on the location of the splunk CLI executable.
* Specifically, the parent of the directory containing splunk or
splunk.exe
* Must be set if Common Criteria mode is enabled.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Defaults to unset.
SPLUNK_DB=<pathname>
* The comment in the auto-generated splunk-launch.conf is
informational, not
a live setting, and does not need to be uncommented.
* Fully qualified path to the directory containing the splunk index
directories.
* Primarily used by paths expressed in indexes.conf
* If unset, becomes $SPLUNK_HOME/var/lib/splunk (unix) or
%SPLUNK_HOME%\var\lib\splunk (windows)
* Defaults to unset.
SPLUNK_BINDIP=<ip address>
* Specifies an interface that splunkd and splunkweb should bind to, as
opposed to binding to the default for the local operating system.
* If unset, Splunk makes no specific request to the operating system
when
binding to ports/opening a listening socket. This means it
effectively
binds to '*'; i.e. an unspecified bind. The exact result of this is
controlled by operating system behavior and configuration.
* NOTE: When using this setting you must update mgmtHostPort in
web.conf to
match, or the command line and splunkweb will not know how to
reach splunkd.
* For splunkd, this sets both the management port and the receiving
ports
(from forwarders).
* Useful for a host with multiple IP addresses, either to enable
access or restrict access; though firewalling is typically a superior
method of restriction.
* Overrides the Splunkweb-specific
web.conf/[settings]/server.socket_host
param; the latter is preferred when SplunkWeb behavior is the focus.
* Defaults to unset.
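For example, a sketch that binds to one address of a multi-homed host (the
address is illustrative; remember to update mgmtHostPort in web.conf to
match):
  SPLUNK_BINDIP=10.1.2.3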
SPLUNK_IGNORE_SELINUX=true
* If unset (not present), Splunk on Linux will abort startup if it
detects
it is running in an SELinux environment. This is because in
shipping/distribution-provided SELinux environments, Splunk will not
be
permitted to work, and Splunk will not be able to identify clearly
why.
* This setting is useful in environments where you have configured
SELinux
to enable Splunk to work.
* If set to any value, Splunk will launch, despite the presence of
SELinux.
* Defaults to unset.
#*******
# Service/server names.
#
# These settings are considered internal, and altering them is not
# supported.
#
# Under Windows, they influence the expected name of the service;
# on UNIX they influence the reported name of the appropriate
# server or daemon process.
#
# On Linux distributions that run systemd, this is the name of the
# unit file for the service that Splunk Enterprise runs as.
# For example, if you set 'SPLUNK_SERVER_NAME' to 'splunk'
# then the corresponding unit file should be named 'splunk.service'.
#
# If you want to run multiple instances of Splunk as *services* under
# Windows, you will need to change the names below for 2nd, 3rd, ...,
# instances. That is because the 1st instance has taken up service names
# 'Splunkd' and 'Splunkweb', and you may not have multiple services with
# the same name.
#*******
SPLUNK_SERVER_NAME=<name>
* Names the splunkd server/service.
* Defaults to splunkd (UNIX), or Splunkd (Windows).
SPLUNK_WEB_NAME=<name>
* Names the Python app server / web server/service.
* Defaults to splunkweb (UNIX), or Splunkweb (Windows).
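For example, a sketch for a second Splunk instance run as Windows services
(the names are illustrative):
  SPLUNK_SERVER_NAME=Splunkd2
  SPLUNK_WEB_NAME=Splunkweb2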
#*******
# File system check enable/disable
#
# CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!!
# USE OF THIS ADVANCED SETTING IS NOT SUPPORTED. IRREVOCABLE DATA LOSS
# CAN OCCUR. YOU USE THE SETTING SOLELY AT YOUR OWN RISK.
# CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!!
#
# When Splunk software encounters a file system that it does not
recognize,
# it runs a utility called 'locktest' to confirm that it can write to
the
# file system correctly. If 'locktest' fails for any reason, splunkd
# cannot start.
#
# The following setting lets you temporarily bypass the 'locktest'
# check (for example, when a software vendor introduces a new default
# file system on a popular operating system.) When it is active, splunkd
# starts regardless of its ability to interact with the file system.
#
# Use this setting if and only if:
#
# * You are a skilled Splunk administrator and know what you are doing.
# * You use Splunk software in a development environment.
# * You want to recover from a situation where the default
# filesystem has been changed outside of your control (such as
# during an operating system upgrade.)
# * You want to recover from a situation where a Splunk bug
# has invalidated a previously functional file system after an
upgrade.
# * You want to evaluate the performance of a file system for which
# Splunk has not yet offered support.
# * You have been given explicit instruction from Splunk Support to use
# the setting to solve a problem where Splunk software does not start
# because of a failed file system check.
# * You understand and accept all of the risks of using the setting,
# up to and including LOSING ALL YOUR DATA WITH NO CHANCE OF RECOVERY
#   while the setting is active.
#
# If none of these scenarios applies to you, then DO NOT USE THE
SETTING.
#
# REPEAT:
# USE OF THIS ADVANCED SETTING IS NOT SUPPORTED. IRREVOCABLE DATA LOSS
# CAN OCCUR. YOU USE THIS SETTING SOLELY AT YOUR OWN RISK. BY USING THE
# SETTING, YOU ARE ACTIVELY BYPASSING FILE SYSTEM CHECKS THAT ARE
# DESIGNED TO CONFIRM THAT SPLUNK SOFTWARE CAN WORK ON YOUR MACHINE
# FILE SYSTEM. DO NOT USE THE SETTING AS A LONG-TERM SOLUTION TO A FILE
# SYSTEM PROBLEM. WHEN USING THE SETTING UNDER GUIDANCE OF SPLUNK
# SUPPORT, REPORT ANY PROBLEMS YOU ENCOUNTER WITH INDEXING OR
# SEARCH IMMEDIATELY.
#
#*******
OPTIMISTIC_ABOUT_FILE_LOCKING = [0|1]
* Whether or not Splunk software skips the file system lock check on
unrecognized file systems.
* CAUTION: USE THIS SETTING AT YOUR OWN RISK. YOU CAN LOSE ANY DATA
THAT HAS BEEN INDEXED AS LONG AS THE SETTING IS ACTIVE.
* When set to 1, Splunk software skips the file system check, and
splunkd starts whether or not it can recognize the file system.
* Defaults to 0 (Run the file system check.)
splunk-launch.conf.example
No example
tags.conf
The following are the spec and example files for tags.conf.
tags.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring
tags. Set
# any number of tags for indexed or extracted fields.
#
# There is no tags.conf in $SPLUNK_HOME/etc/system/default/. To set
custom
# configurations, place a tags.conf in $SPLUNK_HOME/etc/system/local/.
For
# help, see tags.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<fieldname>=<value>]
* The field name and value to which the tags in the stanza apply
  (e.g. host=localhost).
* A tags.conf file can contain multiple stanzas. It is recommended that the
  value be URL encoded to avoid config file parsing errors, especially if
  the field value contains the following characters: \n, =, []
* Each stanza can refer to only one field=value
<tag1> = <enabled|disabled>
<tag2> = <enabled|disabled>
<tag3> = <enabled|disabled>
* Set whether each <tag> for this specific <fieldname><value> is enabled
  or disabled.
* While you can have multiple tags in a stanza (meaning that multiple tags
  are assigned to the same field/value combination), only one tag is
  allowed per stanza line. In other words, you can't have a list of tags on
  one line of the stanza.
tags.conf.example
# Version 7.2.1
#
# This is an example tags.conf. Use this file to define tags for
fields.
#
# To use one or more of these configurations, copy the configuration
block into
# tags.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This first example presents a situation where the field is "host" and
the
# three hostnames for which tags are being defined are "hostswitch,"
# "emailbox," and "devmachine." Each hostname has two tags applied to
it, one
# per line. Note also that the "building1" tag has been applied to two
hostname
# values (emailbox and devmachine).
[host=hostswitch]
pci = enabled
cardholder-dest = enabled
[host=emailbox]
email = enabled
building1 = enabled
[host=devmachine]
development = enabled
building1 = enabled
[src_ip=192.168.1.1]
firewall = enabled
[seekPtr=1cb58000]
EOF = enabled
NOT_EOF = disabled
telemetry.conf
The following are the spec and example files for telemetry.conf.
telemetry.conf.spec
# Version 7.2.1
GLOBAL SETTINGS
[general]
optInVersion = <number>
* An integer that identifies the set of telemetry data to be collected
* Incremented upon installation if the data set collected by Splunk has
changed
* This field was introduced for version 2 of the telemetry data set.
So,
when this field is missing, version 1 is assumed.
* Should not be changed manually
optInVersionAcknowledged = <number>
* The latest optInVersion acknowledged by a user on this deployment
* While this value is less than the current optInVersion, a prompt for
data collection opt-in will be shown to users with the
edit_telemetry_settings capability at login
* Once a user confirms interaction with this login - regardless of
opt-in choice - this number will be set to the value of optInVersion
* This gets set regardless of whether the user opts in using the opt-in
dialog or the Settings > Instrumentation page
* If manually decreased or deleted, then a user that previously
acknowledged
the opt-in dialog will not be shown the dialog the next time they log
in
unless the related settings (dismissedInstrumentationOptInVersion and
hideInstrumentationOptInModal) in their user-prefs.conf are also
changed.
* Unset by default
sendLicenseUsage = true|false
* Send the licensing usage information of splunk/app to the app owner
* Defaults to false
sendAnonymizedUsage = true|false
* Send the anonymized usage information about various categories like
infrastructure, utilization etc of splunk/app to Splunk, Inc
* Defaults to false
sendSupportUsage = true|false
* Send the support usage information about various categories like
infrastructure, utilization etc of splunk/app to Splunk, Inc
* Defaults to false
sendAnonymizedWebAnalytics = true|false
* Send the anonymized usage information about user interaction with
splunk performed through the web UI
* Defaults to false
precheckSendLicenseUsage = true|false
* Default value for sending license usage in opt in modal
* Defaults to true
precheckSendAnonymizedUsage = true|false
* Default value for sending anonymized usage in opt in modal
* Defaults to false
precheckSendSupportUsage = true|false
* Default value for sending support usage in opt in modal
* Defaults to false
showOptInModal = true|false
* DEPRECATED - see optInVersion and optInVersionAcknowledged settings
* Shows the opt in modal. DO NOT SET! When a user opts in, it will
automatically be set to false to not show the modal again.
* Defaults to true
deploymentID = <string>
* A uuid used to correlate telemetry data for a single splunk
deployment over time. The value is generated the first time
a user opts in to sharing telemetry data.
deprecatedConfig = true|false
* Setting to determine whether the splunk deployment is following
best practices for the platform as well as the app
* Defaults to false
retryTransaction = <string>
* Setting that is created if the telemetry conf updates cannot be
delivered to
the cluster master for the splunk_instrumentation app.
* Defaults to an empty string
swaEndpoint = <string>
* The URL to which swajs will forward UI analytics events
* If blank, swajs sends events to the Splunk MINT CDS endpoint.
* Blank by default
telemetrySalt = <string>
* A salt used to hash certain fields before transmission
* Autogenerated as a random UUID when splunk starts
scheduledHour = <number>
* Time of day, on a 24 hour clock, that the scripted input responsible
for collecting telemetry data starts.
* The script begins at the top of the hour and completes, including
running searches on the primary instance in your deployment, after a few
minutes.
* Defaults to 3
scheduledDay = <string>
* Number representing the weekday on which telemetry data collection is
executed
* 0 represents Monday
* Defaults to every day (*)
reportStartDate = <string>
* Start date for the next telemetry data collection
* Uses format YYYY-MM-DD
* Defaults to empty string
telemetry.conf.example
[general]
sendLicenseUsage = false
sendAnonymizedUsage = false
sendAnonymizedWebAnalytics = false
precheckSendAnonymizedUsage = false
precheckSendLicenseUsage = true
showOptInModal = true
deprecatedConfig = false
scheduledHour = 16
reportStartDate = 2017-10-27
scheduledDay = 4
times.conf
The following are the spec and example files for times.conf.
times.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for creating custom
time
# ranges.
#
# To set custom configurations, place a times.conf in
# $SPLUNK_HOME/etc/system/local/. For help, see times.conf.example.
You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<timerange_name>]
* The token to be used when accessing time ranges via the API or
command
line
* A times.conf file can contain multiple stanzas.
label = <string>
* The textual description used by the UI to reference this time range
* Required
header_label = <string>
* The textual description used by the UI when displaying search results
in
this time range.
* Optional. If omitted, the <timerange_name> is used instead.
earliest_time = <string>
* The string that represents the time of the earliest event to return,
inclusive.
* The time can be expressed with a relative time identifier or in epoch
time.
* Optional. If omitted, no earliest time bound is used.
latest_time = <string>
* The string that represents the time of the latest event to return,
  inclusive.
* The time can be expressed with a relative time identifier or in epoch
time.
* Optional. If omitted, no latest time bound is used. NOTE: events
that
occur in the future (relative to the server timezone) may be
returned.
order = <integer>
* The key on which all custom time ranges are sorted, ascending.
* The default time range selector in the UI will merge and sort all time
ranges according to the 'order' key, and then alphabetically.
* Optional. Default value is 0.
disabled = <integer>
* Determines if the menu item is shown. Set to 1 to hide menu item.
* Optional. Default value is 0
is_sub_menu = <boolean>
* REMOVED. This setting is no longer used.
[settings]
* List of flags that modify the panels that are displayed in the time
range picker.
show_advanced = [true|false]
* Determines if the 'Advanced' panel should be displayed in the time
range picker
* Optional. Default value is true
show_date_range = [true|false]
* Determines if the 'Date Range' panel should be displayed in the time
range picker
* Optional. Default value is true
show_datetime_range = [true|false]
* Determines if the 'Date & Time Range' panel should be displayed in the
time range picker
* Optional. Default value is true
show_presets = [true|false]
* Determines if the 'Presets' panel should be displayed in the time
range picker
* Optional. Default value is true
show_realtime = [true|false]
* Determines if the 'Realtime' panel should be displayed in the time
range picker
* Optional. Default value is true
show_relative = [true|false]
* Determines if the 'Relative' panel should be displayed in the time
range picker
* Optional. Default value is true
times.conf.example
# Version 7.2.1
#
# This is an example times.conf. Use this file to create custom time
ranges
# that can be used while interacting with the search system.
#
# To use one or more of these configurations, copy the configuration
block
# into times.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
customizations.
# The stanza name is an alphanumeric string (no spaces) that uniquely
# identifies a time range.
[this_business_week]
# Define the ordering sequence of this time range. All time ranges are
# sorted numerically, ascending. If the time range is in a sub menu and
not
# in the main menu, this will determine the position within the sub
menu.
order = 110
# Use epoch time notation to define the time bounds for the Fall Semester
# 2013, where earliest_time is 9/4/13 00:00:00 and latest_time is
# 12/13/13 00:00:00.
#
[Fall_2013]
label = Fall Semester 2013
earliest_time = 1378278000
latest_time = 1386921600
# two time ranges that should appear in a sub menu instead of in the
main
# menu. the order values here determine relative ordering within the
# submenu.
#
[yesterday]
label = Yesterday
earliest_time = -1d@d
latest_time = @d
order = 10
sub_menu = Other options
[day_before_yesterday]
label = Day before yesterday
header_label = from the day before yesterday
earliest_time = -2d@d
latest_time = -1d@d
order = 20
sub_menu = Other options
#
# The sub menu item that should contain the previous two time ranges.
The
# order key here determines the submenu opener's placement within the
main
# menu.
#
[other]
label = Other options
order = 202
#
# Disable the realtime panel in the time range picker
[settings]
show_realtime = false
transactiontypes.conf
The following are the spec and example files for transactiontypes.conf.
transactiontypes.conf.spec
# Version 7.2.1
#
# This file contains all possible attributes and value pairs for a
# transactiontypes.conf file. Use this file to configure transaction
searches
# and their properties.
#
# There is a transactiontypes.conf in $SPLUNK_HOME/etc/system/default/.
To set
# custom configurations, place a transactiontypes.conf in
# $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<TRANSACTIONTYPE>]
* Create any number of transaction types, each represented by a stanza
name and
any number of the following attribute/value pairs.
* Use the stanza name, [<TRANSACTIONTYPE>], to search for the
transaction in
Splunk Web.
* If you do not specify an entry for each of the following attributes,
Splunk
uses the default value.
maxevents = <integer>
* The maximum number of events in a transaction. This constraint is
disabled if
the value is a negative integer.
* Defaults to: maxevents=1000
connected=[true|false]
* Relevant only if fields (see above) is not empty. Controls whether an
  event that is not inconsistent and not consistent with the fields of a
  transaction opens a new transaction (connected=true) or is added to the
  transaction.
* An event can be not inconsistent and not field-consistent if it contains
  fields required by the transaction but none of these fields has been
  instantiated in the transaction (by a previous event addition).
* Defaults to: connected=true
startswith=<transam-filter-string>
* A search or eval filtering expression which, if satisfied by an
event, marks
the beginning of a new transaction.
* For example:
* startswith="login"
* startswith=(username=foobar)
* startswith=eval(speed_field < max_speed_field)
* startswith=eval(speed_field < max_speed_field/12)
* Defaults to: ""
endswith=<transam-filter-string>
* A search or eval filtering expression which, if satisfied by an
event, marks
the end of a transaction.
* For example:
* endswith="logout"
* endswith=(username=foobar)
* endswith=eval(speed_field > max_speed_field)
* endswith=eval(speed_field > max_speed_field/12)
* Defaults to: ""
* For startswith/endswith, <transam-filter-string> has one of the
  following syntaxes: "<search-expression>" | (<quoted-search-expression>)
  | eval(<eval-expression>)
* Where:
* <search-expression> is a valid search expression that does
not contain quotes
* <quoted-search-expression> is a valid search expression that
contains quotes
* <eval-expression> is a valid eval expression that
evaluates to a boolean. For example,
startswith=eval(foo<bar*2) will match events where foo is less than
2 x bar.
* Examples:
* "<search expression>": startswith="foo bar"
* <quoted-search-expression>: startswith=(name="mildred")
* <quoted-search-expression>: startswith=("search literal")
* eval(<eval-expression>): startswith=eval(distance/time <
max_speed)
maxopentxn=<int>
* Specifies the maximum number of not yet closed transactions to keep in
the
open pool. When this limit is surpassed, Splunk begins evicting
transactions
using LRU (least-recently-used memory cache algorithm) policy.
* The default value of this attribute is read from the transactions
stanza in
limits.conf.
maxopenevents=<int>
* Specifies the maximum number of events that can be part of open
  transactions.
When this limit is surpassed, Splunk begins evicting transactions
using LRU
(least-recently-used memory cache algorithm) policy.
* The default value of this attribute is read from the transactions
stanza in
limits.conf.
keepevicted=<bool>
* Whether to output evicted transactions. Evicted transactions can be
distinguished from non-evicted transactions by checking the value of
the
'evicted' field, which is set to '1' for evicted transactions.
* Defaults to: keepevicted=false
mvlist=<bool>|<field-list>
* Field controlling whether the multivalued fields of the transaction are
  (1) a list of the original events ordered in arrival order or (2) a set
  of unique field values ordered lexicographically. If a comma/space
  delimited list of fields is provided, only those fields are rendered as
  lists.
* Defaults to: mvlist=f
delim=<string>
* A string used to delimit the original event values in the transaction
event
fields.
* Defaults to: delim=" "
nullstr=<string>
* The string value to use when rendering missing field values as part of
mv
fields in a transaction.
* This option applies only to fields that are rendered as lists.
* Defaults to: nullstr=NULL
search=<string>
* A search string used to more efficiently seed transactions of this
type.
* The value should be as specific as possible, to limit the number of
events
that must be retrieved to find transactions.
* Example: sourcetype="sendmail_sendmail"
* Defaults to "*" (all events)
transactiontypes.conf.example
# Version 7.2.1
#
# This is an example transactiontypes.conf. Use this file as a
template to
# configure transactions types.
#
# To use one or more of these configurations, copy the configuration
block into
# transactiontypes.conf in $SPLUNK_HOME/etc/system/local/.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
maxspan = 5m
maxpause = 2s
match = closest
[purchase]
maxspan = 10m
maxpause = 5m
fields = userid
transforms.conf
The following are the spec and example files for transforms.conf.
transforms.conf.spec
# Version 7.2.1
#
# This file contains settings and values that you can use to configure
# data transformations.
#
# Transforms.conf is commonly used for:
# * Configuring host and source type overrides that are based on regular
# expressions.
# * Anonymizing certain types of sensitive incoming data, such as credit
# card or social security numbers.
# * Routing specific events to a particular index, when you have
multiple
# indexes.
# * Creating new index-time field extractions. NOTE: We do not recommend
# adding to the set of fields that are extracted at index time unless
it
# is absolutely necessary because there are negative performance
# implications.
# * Creating advanced search-time field extractions that involve one or
more
# of the following:
# * Reuse of the same field-extracting regular expression across
multiple
# sources, source types, or hosts.
# * Application of more than one regular expression to the same
source,
# source type, or host.
# * Using a regular expression to extract one or more values from the
values
# of another field.
# * Delimiter-based field extractions, such as extractions where the
# field-value pairs are separated by commas, colons, semicolons,
bars, or
# something similar.
# * Extraction of multiple values for the same field.
# * Extraction of fields with names that begin with numbers or
# underscores.
# * NOTE: Less complex search-time field extractions can be set up
# entirely in props.conf.
# * Setting up lookup tables that look up fields from external sources.
#
# All of the above actions require corresponding settings in
props.conf.
#
# You can find more information on these topics by searching the Splunk
# documentation (http://docs.splunk.com/Documentation).
#
# There is a transforms.conf file in $SPLUNK_HOME/etc/system/default/.
To
# set custom configurations, place a transforms.conf file in
# $SPLUNK_HOME/etc/system/local/.
#
# For examples of transforms.conf configurations, see the
# transforms.conf.example file.
#
# You can enable configuration changes made to transforms.conf by
running this
# search in Splunk Web:
#
# | extract reload=t
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
# * In the case of multiple definitions of the same setting, the last
#   definition in the file wins.
# * If a setting is defined at both the global level and in a specific
# stanza, the value in the specific stanza takes precedence.
[<unique_transform_stanza_name>]
* Name your stanza. Use this name when you configure field extractions,
lookup tables, and event routing in props.conf. For example, if you
are
setting up an advanced search-time field extraction, in props.conf you
would add REPORT-<class> = <unique_transform_stanza_name> under the
[<spec>] stanza that corresponds with a stanza you've created in
transforms.conf.
* Follow this stanza name with any number of the following
setting/value
pairs, as appropriate for what you intend to do with the transform.
* If you do not specify an entry for each setting, Splunk software uses
the default value.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field
extraction. However, FORMAT behaves differently depending on whether
the
extraction is performed at index time or search time.
* This setting specifies the format of the event, including any field
names or
values you want to add.
* FORMAT for index-time extractions:
* Use $n (for example $1, $2, etc) to specify the output of each REGEX
match.
* If REGEX does not have n groups, the matching fails.
* The special identifier $0 represents what was in the DEST_KEY before
the
REGEX was performed.
* At index time only, you can use FORMAT to create concatenated
fields:
* Example: FORMAT = ipaddress::$1.$2.$3.$4
* When you create concatenated fields with FORMAT, "$" is the only
special
character. It is treated as a prefix for regular expression
capturing
groups only if it is followed by a number and only if the
number applies to
an existing capturing group. So if REGEX has only one capturing
group and
its value is "bar", then:
* "FORMAT = foo$1" yields "foobar"
* "FORMAT = foo$bar" yields "foo$bar"
* "FORMAT = foo$1234" yields "foo$1234"
* "FORMAT = foo$1\$2" yields "foobar\$2"
* At index-time, FORMAT defaults to <stanza-name>::$1
* FORMAT for search-time extractions:
* The format of this field as used during search time extractions is
as
follows:
* FORMAT = <field-name>::<field-value>(
<field-name>::<field-value>)*
where:
* field-name = [<string>|$<extracting-group-number>]
* field-value = [<string>|$<extracting-group-number>]
* Search-time extraction examples:
* 1. FORMAT = first::$1 second::$2 third::other-value
* 2. FORMAT = $1::$2
* If you configure FORMAT with a variable <field-name>, such as in the
second
example above, the regular expression is repeatedly applied to the
source
key to match and extract all field/value pairs in the event.
* When you use FORMAT to set both the field and the value (such as
FORMAT =
third::other-value), and the value is not an indexed token, you
must set the
field to INDEXED_VALUE = false in fields.conf. Not doing so can
cause
inconsistent search results.
* NOTE: You cannot create concatenated fields with FORMAT at search
time.
That functionality is only available at index time.
* At search-time, FORMAT defaults to an empty string.
MATCH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field
extractions.
For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE
when running patterns that do not match.
* Use this to set an upper bound on how many times PCRE calls an
internal
function, match(). If set too low, PCRE may fail to correctly match a
pattern.
* Default: 100000
DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field
extractions.
For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE
when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
function, match(). If set too low, PCRE might fail to correctly match
a
pattern.
* Default: 1000
CLONE_SOURCETYPE = <string>
* This name is wrong; a transform with this setting actually clones and
modifies events, and assigns the new events the specified source type.
* If CLONE_SOURCETYPE is used as part of a transform, the transform
creates a
modified duplicate event for all events that the transform is applied
to via
normal props.conf rules.
* Use this setting when you need to store both the original and a
modified
form of the data in your system, or when you need to send the
original and
a modified form to different outbound systems.
* A typical example would be to retain sensitive information according
to
one policy and a version with the sensitive information removed
according to another policy. For example, some events may have data
that you must retain for 30 days (such as personally identifying
information) and only 30 days with restricted access, but you need
that
event retained without the sensitive data for a longer time with
wider
access.
* Specifically, for each event handled by this transform, a near-exact
copy
is made of the original event, and the transformation is applied to
the
copy. The original event continues along normal data processing
unchanged.
* The <string> used for CLONE_SOURCETYPE selects the source type that is
used
for the duplicated events.
* The new source type MUST differ from the original source type. If
the
original source type is the same as the target of the
CLONE_SOURCETYPE,
Splunk software makes a best effort to log warnings to splunkd.log,
but this
setting is silently ignored at runtime for such cases, causing the
transform
to be applied to the original event without cloning.
* The duplicated events receive index-time transformations & sed
commands for all transforms that match its new host, source, or source
type.
* This means that props.conf matching on host or source will
incorrectly be
applied a second time.
* Can only be used as part of of an otherwise-valid index-time
transform. For
example REGEX is required, there must be a valid target (DEST_KEY or
WRITE_META), etc as above.
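For example, a sketch of a cloning transform (the stanza name, regular
expression, and source type are hypothetical) that keeps the original event
and writes a redacted copy to a second source type; like any index-time
transform, it would be wired up via a TRANSFORMS-<class> setting in
props.conf:
  [redacted_clone]
  REGEX = ^(.*)password=\S+(.*)$
  FORMAT = $1password=####$2
  DEST_KEY = _raw
  CLONE_SOURCETYPE = mydata_redacted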
LOOKAHEAD = <integer>
* NOTE: This option is valid for all index time transforms, such as
index-time field creation, or DEST_KEY modifications.
* Optional. Specifies how many characters to search into an event.
* Default: 4096
* You may want to increase this value if you have event line lengths
that
exceed 4096 characters (before linebreaking).
WRITE_META = [true|false]
* NOTE: This setting is only valid for index-time field extractions.
* Automatically writes REGEX to metadata.
* Required for all index-time field extractions except for those where
DEST_KEY = _meta (see the description of the DEST_KEY setting, below)
* Use instead of DEST_KEY = _meta.
* Default: false
DEST_KEY = <KEY>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in
  accordance with the REGEX check.
DEFAULT_VALUE = <string>
* NOTE: This setting is only valid for index-time field extractions.
* Optional. The Splunk software writes the DEFAULT_VALUE to DEST_KEY if
the
REGEX fails.
* Default: empty string
SOURCE_KEY = <string>
* NOTE: This setting is valid for both index-time and search-time field
extractions.
* Optional. Defines the KEY that Splunk software applies the REGEX to.
* For search time extractions, you can use this setting to extract one
or
more values from the values of another field. You can use any field
that
is available at the time of the execution of this field extraction
* For index-time extractions use the KEYs described at the bottom of
this
file.
* KEYs are case-sensitive, and should be used exactly as they appear
in
the KEYs list at the bottom of this file. (For example, you would
say
SOURCE_KEY = MetaData:Host, *not* SOURCE_KEY = metadata:host .)
* If <string> starts with "field:" or "fields:" the meaning is changed.
Instead of looking up a KEY, it instead looks up an already indexed
field.
For example, if a CSV field name "price" was indexed then
"SOURCE_KEY = field:price" causes the REGEX to match against the
contents
of that field. It's also possible to list multiple fields here with
"SOURCE_KEY = fields:name1,name2,name3" which causes MATCH to be run
against a string comprising of all three values, separated by space
characters.
* SOURCE_KEY is typically used in conjunction with REPEAT_MATCH in
index-time field transforms.
* Default: _raw
* This means it is applied to the raw, unprocessed text of all events.
REPEAT_MATCH = [true|false]
* NOTE: This setting is only valid for index-time field extractions.
* Optional. When set to true, Splunk software runs the REGEX multiple
times on the SOURCE_KEY.
* REPEAT_MATCH starts wherever the last match stopped, and continues
until
no more matches are found. Useful for situations where an unknown
number
of REGEX matches are expected per event.
* Default: false
INGEST_EVAL = <comma-separated list of evaluator expressions>
* NOTE: This setting is only valid for index-time transforms.
* Optional. Runs one or more eval expressions at index time, for example
  'status=if(x > 500, "error", "normal")'
* When writing to a _meta field, the default behavior is to add a new
index-time field even if one exists with the same name, the same way
WRITE_META works for regular-expression-based extractions. For
example, "a=5,
a=a+2" adds two index-time fields to _meta: "a::5 a::7". You can
change this
by using ":=" after the variable name. For example, setting "a=5,
a:=a+2"
causes Splunk software to add a single "a::7" field.
* NOTE: Replacing index-time fields is slower than adding them. It is
best to
only use ":=" when you need this behavior.
* The ":=" operator can also be used to remove existing fields in _meta
by assigning the expression null() to them.
* When reading from an index-time field that occurs multiple times
inside the
_meta key, normally the first value is used. You can override this by
prefixing the name with "mv:" which returns all of the values into a
"multival" object. For example, if _meta contains the keys "v::a v::b"
then
'mvjoin(v,",")' returns "a" while 'mvjoin($mv:v$,",")' returns "a,b".
* Note that this "mv:" prefix does not change behavior when it writes
to a
_meta field. If the value returned by an expression is a multivalue,
it
always creates multiple index-time fields. For example,
'x=mvappend("a","b","c")' causes the string "x::a x::b x::c" to be
appended
to the _meta key.
* Internally, the _meta key can hold values with various numeric types.
Splunk software normally picks a type appropriate for the value that
the
expression returned. However, you can override this choice by
specifying
a type in square brackets after the destination field name. For
example,
'my_len[int]=length(source)' creates a new field named "my_len" and
forces it
to be stored as a 64-bit integer inside _meta. You can force Splunk
software
to store a number as floating point by using the type "[float]". You
can
request a smaller, less-precise encoding by using "[float32]". If you
want to
store the value as floating point but also ensure that the Splunk
software
remembers the significant-figures information that the evaluation
expression
deduced, use "[float-sf]" or "[float32-sf]". Finally, you can force
the
result to be treated as a string by specifying "[string]".
* The capability of the search-time |eval operator to name the
destination
field based on the value of another field (like "| eval
{destname}=1")
is NOT available for index-time evaluations.
* Default: empty
FIELDS = <quoted string list>
* NOTE: This setting is only valid for search-time field extractions.
* Used in conjunction with DELIMS when you are performing
delimiter-based
field extraction and only have field values to extract.
* FIELDS enables you to provide field names for the extracted field
values,
in list format according to the order in which the values are
extracted.
* NOTE: If field names contain spaces or commas, they must be quoted
  with " ". To escape, use \.
* The following example is a delimiter-based field extraction where
three
field values appear in an event. They are separated by a comma and
then a
space.
[commalist]
DELIMS = ", "
FIELDS = field1, field2, field3
* Default: ""
MV_ADD = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls what the extractor does when it finds a field which
already exists.
* If set to true, the extractor makes the field a multivalued field and
appends the newly found value, otherwise the newly found value is
discarded.
* Default: false
CLEAN_KEYS = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software "cleans" the keys (field
names) it
extracts at search time. "Key cleaning" is the practice of replacing
any
non-alphanumeric characters (characters other than those falling
between the
a-z, A-Z, or 0-9 ranges) in field names with underscores, as well as
the
stripping of leading underscores and 0-9 characters from field names.
* Add CLEAN_KEYS = false to your transform if you need to extract field
names that include non-alphanumeric characters, or which begin with
underscores or 0-9 characters.
* Default: true
KEEP_EMPTY_VALS = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software keeps field/value pairs
when
the value is an empty string.
* This option does not apply to field/value pairs that are generated by
Splunk software autokv extraction. Autokv ignores field/value pairs
with
empty values.
* Default: false
CAN_OPTIMIZE = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software can optimize this
extraction out
(another way of saying the extraction is disabled).
* You might use this if you are running searches under a Search Mode
  setting that disables field discovery -- it ensures that Splunk
  software always discovers specific fields.
* Splunk software only disables an extraction if it can determine that
none of
the fields identified by the extraction will ever be needed for the
successful
evaluation of a search.
* NOTE: This option should rarely be set to false.
* Default: true
Lookup tables
filename = <string>
* Name of static lookup file.
* File should be in $SPLUNK_HOME/etc/system/lookups/, or in
  $SPLUNK_HOME/etc/apps/<app_name>/lookups/ if the lookup belongs to a
  specific app.
* If file is in multiple 'lookups' directories, no layering is done.
* Standard conf file precedence is used to disambiguate.
* Only file names are supported. Paths are explicitly not supported. If
you
specify a path, Splunk software strips the path to use the value after
the final path separator.
* Splunk software then looks for this filename in
  $SPLUNK_HOME/etc/system/lookups/ or
  $SPLUNK_HOME/etc/apps/<app_name>/lookups/.
* Default: empty string
collection = <string>
* Name of the collection to use for this lookup.
* Collection should be defined in
  $SPLUNK_HOME/etc/apps/<app_name>/collections.conf for some <app_name>.
* If the collection is in multiple collections.conf files, no layering
  is done.
* Standard conf file precedence is used to disambiguate.
* Defaults to empty string (in which case the name of the stanza is
used).
max_matches = <integer>
* The maximum number of possible matches for each input lookup value
(range 1 - 1000).
* If the lookup is non-temporal (not time-bounded, meaning the
time_field
setting is not specified), Splunk software uses the first <integer>
entries,
in file order.
* If the lookup is temporal, Splunk software uses the first <integer>
entries
in descending time order. In other words, only <max_matches> lookup
entries
are allowed to match. If the number of lookup entries exceeds
<max_matches>,
only the ones nearest to the lookup value are used.
* Default = 1000 if the lookup is not temporal, default = 1 if it is
temporal.
min_matches = <integer>
* Minimum number of possible matches for each input lookup value.
* Default = 0 for both temporal and non-temporal lookups, which means
that
Splunk software outputs nothing if it cannot find any matches.
* However, if min_matches > 0, and Splunk software gets less than
min_matches,
it provides the default_match value provided (see below).
default_match = <string>
* If min_matches > 0 and Splunk software has less than min_matches for
any
given input, it provides this default_match value one or more times
until the
min_matches threshold is reached.
* Defaults to empty string.
case_sensitive_match = <bool>
* NOTE: To disable case-sensitive matching with input fields and values
from
events, the KV Store lookup data must be entirely in lower case. The
input
data can be of any case, but the KV Store data must be lower case.
* If set to false, case insensitive matching is performed for all fields
in a
lookup table
* Defaults to true (case sensitive matching)
match_type = <string>
* A comma- and space-delimited list of <match_type>(<field_name>)
  specifications to allow for non-exact matching.
* The available match_type values are WILDCARD, CIDR, and EXACT. Only
fields
that should use WILDCARD or CIDR matching should be specified in this
list.
* Default: EXACT
external_cmd = <string>
* Provides the command and arguments to invoke to perform a lookup. Use
  this for external (or "scripted") lookups, where you interface with an
  external script rather than a lookup table.
* This string is parsed like a shell command.
* The first argument is expected to be a python script (or executable
  file) located in $SPLUNK_HOME/etc/apps/<app_name>/bin (or
  ../etc/searchscripts).
* Presence of this field indicates that the lookup is external and
command
based.
* Default: empty string
fields_list = <string>
* A comma- and space-delimited list of all fields that are supported by
the
external command.
index_fields_list = <string>
* A comma- and space-delimited list of fields that need to be indexed
for a static .csv lookup file.
* The other fields are not indexed and not searchable.
* Restricting the fields enables better lookup performance.
* Defaults to all fields that are defined in the .csv lookup file
header.
external_type = [python|executable|kvstore|geo]
* This setting describes the external lookup type.
* Use 'python' for external lookups that use a python script.
* Use 'executable' for external lookups that use a binary executable,
such as a
C++ executable.
* Use 'kvstore' for KV store lookups.
* Use 'geo' for geospatial lookups.
* Default: python
time_field = <string>
* Used for temporal (time bounded) lookups. Specifies the name of the
field
in the lookup table that represents the timestamp.
* Default: empty string
* This means that lookups are not temporal by default.
time_format = <string>
* For temporal lookups this specifies the 'strptime' format of the
timestamp
field.
* You can include subseconds but Splunk software ignores them.
* Default: %s.%Q (seconds from unix epoch in UTC and optional
milliseconds)
max_offset_secs = <integer>
* For temporal lookups, this is the maximum time (in seconds) that the
event
timestamp can be later than the lookup entry time for a match to
occur.
* Default: 2000000000
min_offset_secs = <integer>
* For temporal lookups, this is the minimum time (in seconds) that the
event
timestamp can be later than the lookup entry timestamp for a match to
occur.
* Default: 0
batch_index_query = <bool>
* For large file-based lookups, batch_index_query determines whether
queries
can be grouped to improve search performance.
* Default is unspecified here, but defaults to true (at global level in
limits.conf)
allow_caching = <bool>
* Allow output from lookup scripts to be cached
* Default: true
cache_size = <integer>
* Cache size to be used for a particular lookup. If a previously looked
up
value is already present in the cache, it is applied.
* The cache size represents the number of input values for which to
cache
output values from a lookup table.
* Do not change this value unless you are advised to do so by Splunk
Support or
a similar authority.
* Default: 10000
max_ext_batch = <integer>
* The maximum size of external batch (range 1 - 1000).
* This setting applies only to KV Store lookup configurations.
* Default: 300
filter = <string>
* Filter results from the lookup table before returning data. Create
this filter
like you would a typical search query using Boolean expressions
and/or
comparison operators.
* For KV Store lookups, filtering is done when data is initially
retrieved to
improve performance.
* For CSV lookups, filtering is done in memory.
feature_id_element = <string>
* If the lookup file is a kmz file, this field can be used to specify
the xml
path from placemark down to the name of this placemark.
* This setting applies only to geospatial lookup configurations.
* Default: /Placemark/name
check_permission = <bool>
* Specifies whether the system can verify that a user has write
permission to a
lookup file when that user uses the outputlookup command to modify
that file.
If the user does not have write permissions, the system prevents the
modification.
* The check_permission setting is only respected when
output_check_permission
is set to "true" in limits.conf.
* You can set lookup table file permissions in the .meta file for each
lookup
file, or through the Lookup Table Files page in Settings. By default,
only
users who have the admin or power role can write to a shared CSV
lookup file.
* This setting applies only to CSV lookup configurations.
* Default: false
replicate = true|false
* Indicates whether to replicate CSV lookups to indexers.
* When false, the CSV lookup is replicated only to search heads in a
search
head cluster so that input lookup commands can use this lookup on the
search
heads.
* When true, the CSV lookup is replicated to both indexers and search
heads.
* Only for CSV lookup files.
* Note that replicate=true works only if the lookup is included in the
  replication whitelist. See the [replicationWhitelist] stanza in
  distsearch.conf.
* Default: true
METRICS - STATSD DIMENSION EXTRACTION
[statsd-dims:<unique_transforms_stanza_name>]
* 'statsd-dims' prefix indicates this stanza is applicable only to
statsd metric
type input data.
* This stanza is used to define a regular expression that matches and
  extracts dimensions out of statsd dotted name segments.
* By default, only the unmatched segments of the statsd dotted name
segment
become the metric_name.
REMOVE_DIMS_FROM_METRIC_NAME = <boolean>
* If set to false, the matched dimension values from the REGEX above
would also
be a part of the metric name.
* If true, the matched dimension values would not be a part of metric
name.
* Default: true
[metric-schema:<unique_transforms_stanza_name>]
* The 'metric-schema' stanza transforms index-time field extractions
from a
single log event into metrics.
* Each metric created has its own metric_name and _value.
* The other fields extracted from the log event become dimensions in
the
generated metrics.
* You must provide one of the following two settings:
METRIC-SCHEMA-MEASURES-<unique_metric_name_prefix> or
METRIC-SCHEMA-MEASURES. These
settings determine how values for the metric_name and _value fields
are obtained.
METRIC-SCHEMA-MEASURES-<unique_metric_name_prefix> = <measure_field1>,
<measure_field2>,...
* Optional.
* <unique_metric_name_prefix> should match the value of a field
extracted from
the event.
* <measure_field> should match the name of a field with a numeric value
extracted from the event.
* If the value of the 'metric_name' index-time extraction matches the
  <unique_metric_name_prefix>, the Splunk platform:
* Creates a metric with a new metric_name for each <measure_field>
where the
metric_name value is the <measure_field> prefixed by the
<unique_metric_name_prefix>.
* Saves the corresponding numeric value for each <measure_field> as
'_value'
within each metric.
* The Splunk platform saves the remaining index-time field extractions
as
dimensions in each of the created metrics.
* Default: empty
METRIC-SCHEMA-BLACKLIST-DIMS-<unique_metric_name_prefix> =
<dimension_field1>, <dimension_field2>,...
* Optional
* This configuration enables the Splunk platform to omit unnecessary
dimensions
when it transforms log data to metrics data. You might set this up if
you
have high-cardinality dimensions that are unnecessary for your
metrics.
* Use this configuration in conjunction with a corresponding
METRIC-SCHEMA-MEASURES-<unique_metric_name_prefix> configuration.
* <unique_metric_name_prefix> should match the value of a field
extracted from
the log event.
* <dimension_field> should match the name of a field in the log event
that is
not extracted as a <measure_field> in the corresponding
METRIC-SCHEMA-
MEASURES-<unique_metric_name_prefix> configuration.
* Default: empty
METRIC-SCHEMA-BLACKLIST-DIMS = <dimension_field1>,
<dimension_field2>,...
* Optional
* This configuration enables the Splunk platform to omit unnecessary
dimensions
when it transforms log data to metrics data. You might set this up if
you
have high-cardinality dimensions that are unnecessary for your
metrics.
* Use this configuration in conjunction with a corresponding
METRIC-SCHEMA-MEASURES configuration.
* <dimension_field> should match the name of a field in the log event
that is
not extracted as a <measure_field> in the corresponding
METRIC-SCHEMA-
MEASURES configuration.
* Default: empty
KEYS:
* NOTE: Keys are case-sensitive. Use the following keys exactly as they
  appear.
queue               : Specify which queue to send the event to (for
                      example, nullQueue or indexQueue).
_raw                : The raw text of the event.
_meta               : A space-separated list of metadata for an event.
_time               : The timestamp of the event, in seconds since
                      1/1/1970 UTC.
MetaData:Host       : The host associated with the event.
                      The value must be prefixed by "host::"
_MetaData:Index     : The index where the event should be stored.
MetaData:Source     : The source associated with the event.
                      The value must be prefixed by "source::"
MetaData:Sourcetype : The sourcetype of the event.
                      The value must be prefixed by "sourcetype::"
_TCP_ROUTING        : Comma separated list of tcpout group names (from
                      outputs.conf). Defaults to groups present in
                      'defaultGroup' for [tcpout].
* NOTE: Any KEY (field name) prefixed by '_' is not indexed by Splunk
  software, in general.
[accepted_keys]
<name> = <key>
transforms.conf.example
# Version 7.2.1
#
# This is an example transforms.conf. Use this file to create regexes
and
# rules for transforms. Use this file in tandem with props.conf.
#
# To use one or more of these configurations, copy the configuration
block
# into transforms.conf in $SPLUNK_HOME/etc/system/local/. You must
restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
customizations.
# Indexed field:
[netscreen-error]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
WRITE_META = true
# Override host:
[hostoverride]
DEST_KEY = MetaData:Host
REGEX = \s(\w*)$
FORMAT = host::$1
# Extracted fields:
[netscreen-error-field]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
# Index-time evaluations:
[discard-long-lines]
INGEST_EVAL = queue=if(length(_raw) > 500, "nullQueue", "")
[split-into-sixteen-indexes-for-no-good-reason]
INGEST_EVAL = index="split_" . substr(md5(_raw),1,1)
[add-two-numeric-fields]
INGEST_EVAL = loglen_raw=ln(length(_raw)),
loglen_src=ln(length(source))
# In this example we only create the new index-time field if the host
# had a dot in it; assigning null() to a new field is a no-op:
[add-hostdomain-field]
INGEST_EVAL = hostdomain=if(host LIKE "%.%",
replace(host,"^[^\\.]+\\.",""), null())
[mylookuptable]
filename = mytable.csv
# One-to-one lookup: guarantees that we output a single lookup value for
# each input value; if no match exists, we use the value of
# "default_match", which by default is "NONE"
[mylook]
filename = mytable.csv
max_matches = 1
min_matches = 1
default_match = nothing
[myexternaltable]
external_cmd = testadapter.py blah
fields_list = foo bar
[staticwtime]
filename = mytable.csv
time_field = timestamp
time_format = %d/%m/%y %H:%M:%S
[session-anonymizer]
REGEX = (?m)^(.*)SessionId=\w+(\w{4}[&"].*)$
FORMAT = $1SessionId=########$2
DEST_KEY = _raw
[AppRedirect]
REGEX = Application
DEST_KEY = _MetaData:Index
FORMAT = Verbose
[extract_csv]
DELIMS = ","
FIELDS = "field1", "field2", "field3"
# This example assigns the extracted values from _raw to field1, field2
and
# field3 (in order of extraction). If more than three values are
extracted
# the values without a matching field name are ignored.
[pipe_eq]
DELIMS = "|", "="
# The above example extracts key-value pairs which are separated by '|'
# while the key is delimited from value by '='.
[multiple_delims]
DELIMS = "|;", "=:"
# The above example extracts key-value pairs which are separated by '|'
or
# ';', while the key is delimited from value by '=' or ':'.
[all_lazy]
REGEX = .*?
[all]
REGEX = .*
[nspaces]
# matches one or more NON space characters
REGEX = \S+
[alphas]
# matches a string containing only letters a-zA-Z
REGEX = [a-zA-Z]+
[alnums]
# matches a string containing letters + digits
REGEX = [a-zA-Z0-9]+
[qstring]
# matches a quoted "string" - extracts an unnamed variable
# name MUST be provided as in [[qstring:name]]
# Extracts: empty-name-group (needs name)
REGEX = "(?<>[^"]*+)"
[sbstring]
# matches a string enclosed in [] - extracts an unnamed variable
# name MUST be provided as in [[sbstring:name]]
# Extracts: empty-name-group (needs name)
REGEX = \[(?<>[^\]]*+)\]
[digits]
REGEX = \d+
[int]
# matches an integer or a hex number
REGEX = 0x[a-fA-F0-9]+|\d+
[float]
# matches a float (or an int)
REGEX = \d*\.\d+|[[int]]
[octet]
# this would match only numbers from 0-255 (one octet in an ip)
REGEX = (?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)
[ipv4]
# matches a valid IPv4 address, optionally followed by :port_num; the
# octets in the ip are also validated to the 0-255 range
# Extracts: ip, port
REGEX = (?<ip>[[octet]](?:\.[[octet]]){3})(?::[[int:port]])?
[simple_url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, domain
REGEX = (?<url>\w++://(?<domain>[a-zA-Z0-9\-.:]++)(?:/[^\s"]*)?)
[url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, proto, domain, uri
REGEX =
(?<url>[[alphas:proto]]://(?<domain>[a-zA-Z0-9\-.:]++)(?<uri>/[^\s"]*)?)
[simple_uri]
# matches a uri of the form /path/to/resource?query
# Extracts: uri, uri_path, uri_query
REGEX = (?<uri>(?<uri_path>[^\s\?"]++)(?:\\?(?<uri_query>[^\s"]+))?)
[uri]
# uri  = path optionally followed by query
#        [/this/path/file.js?query=part&other=var]
# path = root part followed by file [/root/part/file.part]
# Extracts: uri, uri_path, uri_root, uri_file, uri_query, uri_domain
#           (optional if in proxy mode)
REGEX = (?<uri>(?:\w++://(?<uri_domain>[^/\s]++))?(?<uri_path>(?<uri_root>/+(?:[^\s\?;=/]*+/+)*)(?<uri_file>[^\s\?;=?/]*+))(?:;[^\s\?]+)?(?:\?(?<uri_query>[^\s"]+))?)
[hide-ip-address]
# Make a clone of an event with the sourcetype masked_ip_address. The
clone
# will be modified; its text changed to mask the ip address.
# The cloned event will be further processed by index-time transforms
and
# SEDCMD expressions according to its new sourcetype.
# In most scenarios an additional transform would be used to direct the
# masked_ip_address event to a different index than the original data.
REGEX = ^(.*?)src=\d+\.\d+\.\d+\.\d+(.*)$
FORMAT = $1src=XXXXX$2
DEST_KEY = _raw
CLONE_SOURCETYPE = masked_ip_addresses
[statsd-dims:regex_stanza2]
REGEX = \S+\.(?<os>\w+):
REMOVE_DIMS_FROM_METRIC_NAME = true
# In the sample log above, group=queue represents the unique metric name
# prefix. Hence, it needs to be formatted and saved as metric_name::queue
# for Splunk to identify queue as a metric name prefix.
[extract_group]
REGEX = group=(\w+)
FORMAT = metric_name::$1
WRITE_META = true
[extract_name]
REGEX = name=(\w+)
FORMAT = name::$1
WRITE_META = true
[extract_max_size_kb]
REGEX = max_size_kb=(\w+)
FORMAT = max_size_kb::$1
WRITE_META = true
[extract_current_size_kb]
REGEX = current_size_kb=(\w+)
FORMAT = current_size_kb::$1
WRITE_META = true
[extract_current_size]
REGEX = current_size=(\w+)
FORMAT = current_size::$1
WRITE_META = true
[extract_largest_size]
REGEX = largest_size=(\w+)
FORMAT = largest_size::$1
WRITE_META = true
[extract_smallest_size]
REGEX = smallest_size=(\w+)
FORMAT = smallest_size::$1
WRITE_META = true
ui-prefs.conf
The following are the spec and example files for ui-prefs.conf.
ui-prefs.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for ui preferences
for a
# view.
#
# There is a default ui-prefs.conf in $SPLUNK_HOME/etc/system/default.
To set
# custom configurations, place a ui-prefs.conf in
# $SPLUNK_HOME/etc/system/local/. To set custom configuration for an
app, place
# ui-prefs.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/. For
examples, see
# ui-prefs.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<stanza name>]
* Stanza name is the name of the xml view file
dispatch.earliest_time =
dispatch.latest_time =
display.prefs.enableMetaData = 0 | 1
display.prefs.showDataSummary = 0 | 1
display.prefs.customSampleRatio = <int>
display.prefs.showSPL = 0 | 1
display.prefs.livetail = 0 | 1
# General options
display.general.enablePreview = 0 | 1
# Event options
display.events.fields = <string>
display.events.type = [raw|list|table]
display.events.rowNumbers = 0 | 1
display.events.maxLines = [0|5|10|20|50|100|200]
display.events.raw.drilldown = [inner|outer|full|none]
display.events.list.drilldown = [inner|outer|full|none]
display.events.list.wrap = 0 | 1
display.events.table.drilldown = 0 | 1
display.events.table.wrap = 0 | 1
# Statistics options
display.statistics.rowNumbers = 0 | 1
display.statistics.wrap = 0 | 1
display.statistics.drilldown = [row|cell|none]
# Visualization options
display.visualizations.type = [charting|singlevalue]
display.visualizations.custom.type = <string>
display.visualizations.chartHeight = <int>
display.visualizations.charting.chart =
[line|area|column|bar|pie|scatter|radialGauge|fillerGauge|markerGauge]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.legend.labelStyle.overflowMode =
[ellipsisEnd|ellipsisMiddle|ellipsisStart]
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.home.showGettingStarted = 0 | 1
display.page.search.searchHistoryTimeFilter = [0|@d|-7d@d|-30d@d]
display.page.search.searchHistoryCount = [10|20|50]
ui-prefs.conf.example
# Version 7.2.1
#
# This file contains example of ui preferences for a view.
#
# To use one or more of these configurations, copy the configuration
block into
# ui-prefs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following ui preferences will default timerange picker on the
search page
# from All time to Today We will store this ui-prefs.conf in
# $SPLUNK_HOME/etc/apps/search/local/ to only update search view of
search app.
[search]
dispatch.earliest_time = @d
dispatch.latest_time = now
ui-tour.conf
The following are the spec and example files for ui-tour.conf.
ui-tour.conf.spec
# Version 7.2.1
#
# This file contains the tours available for Splunk Onboarding
#
# There is a default ui-tour.conf in $SPLUNK_HOME/etc/system/default.
# To create custom tours, place a ui-tour.conf in
# $SPLUNK_HOME/etc/system/local/. To create custom tours for an app,
place
# ui-tour.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/.
#
# To learn more about configuration files (including precedence) see
the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
[<stanza name>]
* Stanza name is the name of the tour
useTour = <string>
* Used to redirect this tour to another when called by Splunk.
* Optional
nextTour = <string>
* String used to determine what tour to start when current tour is
finished.
* Optional
intro = <string>
* A custom string used in a modal to describe what tour is about to be
taken.
* Optional
label = <string>
* The identifying name for this tour used in the tour creation app.
* Optional in general
* Required only if this tour is being linked to from another tour
  (nextTour)
tourPage = <string>
* The Splunk view this tour is associated with (only necessary if it is
linked to).
* Optional
managerPage = <boolean>
* Used to signify that the tourPage is a manager page. This changes the
  url used to render the tourPage to "/manager/{app}/{view}" rather than
  "/app/{app}/{view}".
* Optional
viewed = <boolean>
* A boolean to determine if this tour has been viewed by a user.
* Set by Splunk
skipText = <string>
* The string for the skip button (interactive and image)
* Defaults to "Skip tour"
* Optional
doneText = <string>
* The string for the button at the end of a tour (interactive and
image)
* Defaults to "Try it now"
* Optional
doneURL = <string>
* The Splunk URL of where the user will be directed once the tour is
over.
* The user will click a link/button.
* Helpful to use with above doneText attribute to specify location.
* The Splunk link is formed after the localization portion of the full
  URL. For example, if the link is
  localhost:8000/en-US/app/search/reports, the doneURL is
  "app/search/reports".
* Optional
forceTour = <boolean>
* Used with auto tours to force users to take the tour and not be able
to skip
* Optional
# Users can list as many images with captions as they want. Each new
# image is created by incrementing the number.
imageName<int> = <string>
* The name of the image file (example.png)
* Required for the first image; optional after the first is set
imageCaption<int> = <string>
* The caption string for corresponding image
* Optional
imgPath = <string>
* The subdirectory, relative to Splunk's 'img' directory, in which users
  put the images. This is appended to the url for image access; it does
  not make a server request within Splunk.
  EX) If the user puts images in a subdirectory 'foo': imgPath = foo.
  EX) If within an app, imgPath = foo points to the app's img path of
      appserver/static/img/foo
* Required only if images are not in the main 'img' directory.
# Users can list as many steps with captions as they want. Each new step
# is created by incrementing the number.
urlData = <string>
* String of any querystring variables used with tourPage to create full
url executing this tour.
* Don't add the "?" to the beginning of this string
* Optional
stepText<int> = <string>
* The string used in specified step to describe the UI being showcased.
* Required for the first step; optional after the first is set
stepElement<int> = <selector>
* The UI Selector used for highlighting the DOM element for
corresponding step.
* Optional
stepPosition<int> = <bottom || right || left || top>
* String that sets the position of the tooltip for corresponding step.
* Optional
stepClickEvent<int> = <string>
* The click event (for example, "mousedown" or "click") to trigger on
  the element specified by stepClickElement below.
* Optional
stepClickElement<int> = <string>
* The UI selector used for a DOM element, used in conjunction with the
  click event above.
* Optional
ui-tour.conf.example
# Version 7.2.1
#
# This file contains the tours available for Splunk Onboarding
#
# To update tours, copy the configuration block into
# ui-tour.conf in $SPLUNK_HOME/etc/system/local/. Restart the Splunk
software to
# see the changes.
#
# To learn more about configuration files (including precedence) see
the
# documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# Image Tour
[tour-name]
type = image
imageName1 = TourStep1.png
imageCaption1 = This is the first caption
imageName2 = TourStep2.png
imageCaption2 = This is the second caption
imgPath = /testtour
context = system
doneText = Continue to Tour Page
doneURL = app/toursapp/home
# Interactive Tour
[test-interactive-tour]
type = interactive
tourPage = reports
urlData = data=foo&moredata=bar
label = Interactive Tour Test
stepText1 = Welcome to this test tour
stepText2 = This is the first step in the tour
stepElement2 = .test-selector
stepText3 = This is the second step in the tour
stepElement3 = .test-selector
stepClickEvent3 = mousedown
stepClickElement3 = .test-click-element
forceTour = 1
user-prefs.conf
The following are the spec and example files for user-prefs.conf.
user-prefs.conf.spec
# Version 7.2.1
#
# This file describes some of the settings that are used, and
# can be configured on a per-user basis for use by the Splunk Web UI.
# Settings in this file are requested with user and application scope of
the
# relevant user, and the user-prefs app.
# This means interactive setting of these values will cause the values
to be
# updated in
# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
where
# <username> is the username for the user altering their preferences.
# It also means that values in another app will never be used unless
they
# are exported globally (to system scope) or to the user-prefs app.
[general]
tz = <timezone>
* Specifies the per-user timezone to use
* If unset, the timezone of the Splunk Server or Search Head is used.
* Only canonical timezone names such as America/Los_Angeles should be
used (for best results use the Splunk UI).
* Defaults to unset.
lang = <language>
* Specifies the per-user language preference for non-webui operations,
where
multiple tags are separated by commas.
* If unset, English "en-US" will be used when required.
* Only tags used in the "Accept-Language" HTTP header will be allowed,
such as
"en-US" or "fr-FR".
* Fuzzy matching is supported, where "en" will match "en-US".
* An optional quality setting is supported, such as
  "en-US,en;q=0.8,fr;q=0.6"
* Defaults to unset.
install_source_checksum = <string>
* Records a checksum of the tarball from which a given set of private
user
configurations was installed.
* Analogous to <install_source_checksum> in app.conf.
search_syntax_highlighting = [light|dark|black-white]
* Highlights different parts of a search string with different colors.
* Defaults to light.
* Dashboards ignore this setting.
search_use_advanced_editor = <boolean>
* Specifies whether the search bar is run using the advanced editor or
in just plain text.
* If set to false, search_auto_format, and search_line_numbers will be
false and search_assistant can only be [full|none].
* Defaults to true.
search_assistant = [full|compact|none]
* Specifies the type of search assistant to use when constructing a
search.
* Defaults to compact.
search_auto_format = <boolean>
* Specifies if auto-format is enabled in the search input.
* Defaults to false.
search_line_numbers = <boolean>
* Display the line numbers with the search.
* Defaults to false.
datasets:showInstallDialog = <boolean>
* Flag to enable/disable the install dialog for the datasets addon
* Defaults to true
dismissedInstrumentationOptInVersion = <integer>
* Set by splunk_instrumentation app to its current value of
optInVersion when the opt-in modal is dismissed.
hideInstrumentationOptInModal = <boolean>
* Set to 1 by splunk_instrumentation app when the opt-in modal is
dismissed.
[default]
[general_default]
default_earliest_time = <string>
default_latest_time = <string>
* Sets the global default time range across all apps, users, and roles
on the search page.
[role_<name>]
<name> = <value>
user-prefs.conf.example
# Version 7.2.1
#
# This is an example user-prefs.conf. Use this file to configure
settings
# on a per-user basis for use by the Splunk Web UI.
#
# To use one or more of these configurations, copy the configuration
block
# into user-prefs.conf in $SPLUNK_HOME/etc/system/local/. You must
restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
# customizations.
# EXAMPLE: Setting the default timezone to GMT for all Power and User
role
# members, and setting a different language preference for each.
[role_power]
tz = GMT
lang = en-US
[role_user]
tz = GMT
lang = fr-FR,fr-CA;q=0
user-seed.conf
The following are the spec and example files for user-seed.conf.
user-seed.conf.spec
# Version 7.2.1
#
# Specification for user-seed.conf. Allows configuration of Splunk's
# initial username and password. Currently, only one user can be
configured
# with user-seed.conf.
#
#
# To set the default username and password, place user-seed.conf in
# $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable
configurations.
# If the $SPLUNK_HOME/etc/passwd file is present, the settings in this
file (user-seed.conf) are not used.
#
# Use HASHED_PASSWORD for a more secure installation. To hash a
clear-text password,
# use the 'splunk hash-passwd' command then copy the output to this
file.
#
# If a clear text password is set (not recommended) and the last
# character is '\', it must be followed by a space for the value to be
# read correctly. The password does not include the extra space at the
# end; the space is required only to cancel the special meaning of the
# trailing backslash in the conf file.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[user_info]
* Default is Admin.
USERNAME = <string>
* Username you want to associate with a password.
* Default is Admin.
PASSWORD = <password>
* Password you wish to set for that user.
* Password must meet complexity requirements.
user-seed.conf.example
# Version 7.2.1
#
# This is an example user-seed.conf. Use this file to create an
initial login.
#
# NOTE: When starting Splunk for the first time, the hash of the password
# is stored in $SPLUNK_HOME/etc/system/local/user-seed.conf, and the
# password file is seeded with this hash. This file can also be used to
# set a default username and password, if $SPLUNK_HOME/etc/passwd is not
# present. If the $SPLUNK_HOME/etc/passwd file is present, the settings
# in this file (user-seed.conf) are not used.
#
# To use this configuration, copy the configuration block into
user-seed.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[user_info]
USERNAME = admin
HASHED_PASSWORD =
$6$TOs.jXjSRTCsfPsw$2St.t9lH9fpXd9mCEmCizWbb67gMFfBIJU37QF8wsHKSGud1QNMCuUdWkD8IFSgCZr5.
viewstates.conf
The following are the spec and example files for viewstates.conf.
viewstates.conf.spec
# Version 7.2.1
#
# This file explains how to format viewstates.
#
# To use this configuration, copy the configuration block into
# viewstates.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<view_name>:<viewstate_id>]
<module_id>.<setting_name> = <string>
* The <module_id> is the runtime id of the UI module requesting
persistence
* The <setting_name> is the setting designated by <module_id> to persist
viewstates.conf.example
# Version 7.2.1
#
# This is an example viewstates.conf.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[charting:g3b5fa7l]
ChartTypeFormatter_0_7_0.default = area
Count_0_6_0.count = 10
LegendFormatter_0_13_0.default = right
LineMarkerFormatter_0_10_0.default = false
NullValueFormatter_0_12_0.default = gaps
[*:g3jck9ey]
Count_0_7_1.count = 20
DataOverlay_0_12_0.dataOverlayMode = none
DataOverlay_1_13_0.dataOverlayMode = none
FieldPicker_0_6_1.fields = host sourcetype source date_hour date_mday
date_minute date_month
FieldPicker_0_6_1.sidebarDisplay = True
FlashTimeline_0_5_0.annotationSearch = search index=twink
FlashTimeline_0_5_0.enableAnnotations = true
FlashTimeline_0_5_0.minimized = false
MaxLines_0_13_0.maxLines = 10
RowNumbers_0_12_0.displayRowNumbers = true
RowNumbers_1_11_0.displayRowNumbers = true
RowNumbers_2_12_0.displayRowNumbers = true
Segmentation_0_14_0.segmentation = full
SoftWrap_0_11_0.enable = true
[dashboard:_current]
TimeRangePicker_0_1_0.selected = All time
visualizations.conf
The following are the spec and example files for visualizations.conf.
visualizations.conf.spec
# Version 7.2.1
#
# This file contains definitions for visualizations an app makes
available
# to the system. An app intending to share visualizations with the
system
# should include a visualizations.conf in
$SPLUNK_HOME/etc/apps/<appname>/default
#
# visualizations.conf should include one stanza for each visualization
to be shared
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#*******
# The possible attribute/value pairs for visualizations.conf are:
#*******
[<stanza name>]
disabled = <bool>
* Optional.
* Disable the visualization by setting to true.
* If set to true, the visualization is not available anywhere in Splunk
* Defaults to false.
allow_user_selection = <bool>
* Optional.
* Whether the visualization should be available for users to select
* Defaults to true
label = <string>
* Required.
* The human-readable label or title of the visualization
* Will be used in dropdowns and lists as the name of the visualization
* Defaults to <app_name>.<viz_name>
description = <string>
* Required.
* The short description that will show up in the visualization picker
* Defaults to ""
search_fragment = <string>
* Required.
* An example part of a search that formats the data correctly for the
viz. Typically the last pipe(s) in a search query.
* Defaults to ""
default_height = <int>
* Optional.
* The default height of the visualization in pixels
* Defaults to 250
default_width = <int>
* Optional.
* The default width of the visualization in pixels
* Defaults to 250
min_height = <int>
* Optional.
* The minimum height the visualizations can be rendered in.
* Defaults to 50.
min_width = <int>
* Optional.
* The minimum width the visualizations can be rendered in.
* Defaults to 50.
max_height = <int>
* The maximum height the visualizations supports.
* Optional.
* Default is unbounded.
max_width = <int>
* The maximum width the visualizations supports.
* Optional.
* Default is unbounded.
trellis_default_height = <int>
* Default is 400
trellis_min_widths = <string>
* Default is undefined
trellis_per_row = <string>
* Default is undefined
data_sources = <csv-list>
* Comma separated list of data source types supported by the
visualization.
* Currently the visualization system provides these types of data
sources:
* - primary: Main data source driving the visualization.
* - annotation: Additional data source for time series visualizations to
show discrete event annotation on the time axis.
* Defaults to "primary"
data_sources.<data-source-type>.params.output_mode =
[json_rows|json_cols|json]
* Optional.
* The data format that the visualization expects. One of:
* - "json_rows": corresponds to
SplunkVisualizationBase.ROW_MAJOR_OUTPUT_MODE
* - "json_cols": corresponds to
SplunkVisualizationBase.COLUMN_MAJOR_OUTPUT_MODE
* - "json": corresponds to SplunkVisualizationBase.RAW_OUTPUT_MODE
* Defaults to undefined and requires the javascript implementation to
supply initial data params.
data_sources.<data-source-type>.params.count = <int>
* Optional.
* How many rows of results to request, default is 1000
data_sources.<data-source-type>.params.offset = <int>
* Optional.
* The index of the first requested result row, default is 0
data_sources.<data-source-type>.params.sort_key = <string>
* Optional.
* The field name to sort the results by
data_sources.<data-source-type>.params.sort_direction = [asc|desc]
* Optional.
* The direction of the sort
* - asc: sort in ascending order
* - desc: sort in descending order
* Defaults to desc
data_sources.<data-source-type>.params.search = <string>
* Optional.
* A post-processing search to apply to generate the results
data_sources.<data-source-type>.mapping_filter = <bool>
data_sources.<data-source-type>.mapping_filter.center = <string>
data_sources.<data-source-type>.mapping_filter.zoom = <string>
supports_trellis = <bool>
* Optional.
* Indicates whether trellis layout is available for this visualization
* Defaults to false
supports_drilldown = <bool>
* Optional.
* Indicates whether the visualization supports drilldown (responsive
actions triggered when users click on the visualization).
* Defaults to false
supports_export = <bool>
* Optional.
* Indicates whether the visualization supports being exported to PDF.
* This setting has no effect in third party visualizations.
* Defaults to false
visualizations.conf.example
No example
web.conf
The following are the spec and example files for web.conf.
web.conf.spec
# Version 7.2.1
#
# This file contains possible attributes and values you can use to
configure
# the Splunk Web interface.
#
# There is a web.conf in $SPLUNK_HOME/etc/system/default/. To set
custom
# configurations, place a web.conf in $SPLUNK_HOME/etc/system/local/.
For
# examples, see web.conf.example. You must restart Splunk software to
enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see
# the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[settings]
* Set general Splunk Web configuration options under this stanza name.
* Follow this stanza name with any number of the following
setting/value
pairs.
* If you do not specify an entry for each setting, Splunk Web uses the
default value.
startwebserver = [0 | 1]
* Set whether or not to start Splunk Web.
* 0 disables Splunk Web, 1 enables it.
* Default: 1
appServerPorts = <one or more port numbers>
* Port number(s) for the python-based application server to listen on.
  See server.conf for more information.
* Default: 8065
splunkdConnectionTimeout = <integer>
* The amount of time, in seconds, to wait before timing out when
communicating with
splunkd.
* Must be at least 30. Values smaller than 30 are ignored, and the
  default value is used instead.
* Default: 30
enableSplunkWebClientNetloc = <boolean>
* Control if the splunk web client can override the client network
location
* Default: false
enableSplunkWebSSL = <boolean>
* Toggle between http or https.
* Set to true to enable https and SSL.
* Default: false
privKeyPath = <path>
* The path to the file containing the web server SSL certificate
private key.
* A relative path is interpreted relative to $SPLUNK_HOME and may not
refer
outside of $SPLUNK_HOME (e.g., no ../somewhere).
* You can also specify an absolute path to an external key.
* See also 'enableSplunkWebSSL' and 'serverCert'.
* No default.
serverCert = <path>
* Full path to the Privacy Enhanced Mail (PEM) format Splunk web server
certificate file.
* The file may also contain root and intermediate certificates, if
required.
They should be listed sequentially in the order:
[ Server SSL certificate ]
[ One or more intermediate certificates, if required ]
[ Root certificate, if required ]
* See also 'enableSplunkWebSSL' and 'privKeyPath'.
* Default: $SPLUNK_HOME/etc/auth/splunkweb/cert.pem
sslPassword = <password>
* Password that protects the private key specified by 'privKeyPath'.
* If encrypted private key is used, do not enable client-authentication
on splunkd server. In [sslConfig] stanza of server.conf,
'requireClientCert' must be 'false'.
* Optional.
* Default: The unencrypted private key.
caCertPath = <path>
* DEPRECATED.
* Use 'serverCert' instead.
* A relative path is interpreted relative to $SPLUNK_HOME and may not
refer
outside of $SPLUNK_HOME (e.g., no ../somewhere).
* No default.
requireClientCert = <boolean>
* Requires that any HTTPS client that connects to the Splunk Web HTTPS
server has a certificate that was signed by the CA cert installed
on this server.
* If "true", a client can connect ONLY if a certificate created by our
certificate authority was used on that client.
* If "true", it is mandatory to configure splunkd with same root CA in
server.conf.
This is needed for internal communication between splunkd and
splunkweb.
* Default: false
serviceFormPostURL = http://docs.splunk.com/Documentation/Splunk
* DEPRECATED.
* This setting has been deprecated since Splunk Enterprise version 5.0.3.
userRegistrationURL = https://www.splunk.com/page/sign_up
updateCheckerBaseURL = http://quickdraw.Splunk.com/js/
docsCheckerBaseURL = http://quickdraw.splunk.com/help
* These are various Splunk.com urls that are configurable.
* Setting 'updateCheckerBaseURL' to 0 stops Splunk Web from pinging
Splunk.com for new versions of Splunk software.
enable_insecure_login = <boolean>
* Whether or not the GET-based "/account/insecurelogin" REST endpoint
is enabled.
* Provides an alternate GET-based authentication mechanism.
* If "true", the following url is available:
http://localhost:8000/en-US/account/insecurelogin?loginType=splunk&username=noc&password
* If "false", only the main /account/login endpoint is available
* Default: false
simple_error_page = <boolean>
* Whether or not to display a simplified error page for HTTP errors that
only contains the error status.
* If set to "true", Splunk Web displays a simplified error page for
errors (404, 500, etc.) that only contain the error status.
* If set to "false", Splunk Web displays a more verbose error page that
contains the home link, message, a more_results_link, crashes,
referrer, debug output, and byline
* Default: false
login_content = <string>
* Lets you add custom content to the login page.
* Supports any text including HTML.
* No default.
supportSSLV3Only = <boolean>
* When 'appServerPorts' is set to a non-zero value (the default mode),
this setting is DEPRECATED. SSLv2 is now always disabled.
The exact set of SSL versions allowed is now configurable via the
'sslVersions' setting above.
* When 'appServerPorts' is set to 0, this controls whether SSLv2
connections are disallowed.
* Default (when 'appServerPorts' is set to 0): false
cipherSuite = <cipher suite string>
* If set, uses the specified cipher string for the HTTP server.
* If not set, uses the default cipher string provided by OpenSSL. This
is
used to ensure that the server does not accept connections using weak
encryption protocols.
* Must specify 'dhFile' to enable any Diffie-Hellman ciphers.
* The default can vary. See the cipherSuite setting in
  $SPLUNK_HOME/etc/system/default/web.conf for the current default.
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH) curve
to
use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
SHORT name.
* The list of valid named curves by their short and long names
can be obtained by running this CLI command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
dhFile = <path>
* Full path to the Diffie-Hellman parameter file.
* Relative paths are interpreted as relative to $SPLUNK_HOME, and must
not refer to a location outside of $SPLUNK_HOME.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
root_endpoint = <URI_prefix_string>
* Defines the root URI path on which the appserver will listen
* For example, if you want to proxy the splunk UI at
http://splunk:8000/splunkui,
then set root_endpoint = /splunkui
* Default: /
static_endpoint = <URI_prefix_string>
* Path to static content.
* The path here is automatically appended to root_endpoint defined
above
* Default: /static
static_dir = <relative_filesystem_path>
* The directory that holds the static content
* This can be an absolute URL if you want to put it elsewhere
* Default: share/splunk/search_mrsparkle/exposed
rss_endpoint = <URI_prefix_string>
* Path to static rss content
* The path here is automatically appended to what you defined in the
'root_endpoint' setting
* Default: /rss
embed_uri = <URI>
* Optional URI scheme/host/port prefix for embedded content
* This presents an optional strategy for exposing embedded shared
content that does not require authentication in a reverse proxy/single
sign on environment.
* Default: empty string, resolves to the client
window.location.protocol + "//" + window.location.host
embed_footer = <html_string>
* A block of HTML code that defines the footer for an embedded report.
* Any valid HTML code is acceptable.
* Default: "splunk>"
tools.staticdir.generate_indexes = [1 | 0]
* Whether or not the webserver serves a directory listing for static
directories.
* Default: 0 (false)
template_dir = <relative_filesystem_path>
* The base path to the Mako templates.
* Default: "share/splunk/search_mrsparkle/templates"
module_dir = <relative_filesystem_path>
* The base path to Splunk Web module assets.
* Default: "share/splunk/search_mrsparkle/modules"
enable_gzip = <boolean>
* Whether or not the webserver applies gzip compression to responses.
* Default: true
use_future_expires = <boolean>
* Whether or not the Expires header of /static files is set to a
far-future date
* Default: true
flash_major_version = <integer>
flash_minor_version = <integer>
flash_revision_version = <integer>
* Specifies the minimum Flash plugin version requirements
* Flash support, broken into three parts.
* We currently require a min baseline of Shockwave Flash 9.0 r124
override_JSON_MIME_type_with_text_plain = <boolean>
* Whether or not to override the MIME type for JSON data served up
by Splunk Web endpoints with content-type="text/plain; charset=UTF-8"
* If "true", Splunk Web endpoints (other than proxy) that serve JSON
data will
serve as "text/plain; charset=UTF-8"
* If "false", Splunk Web endpoints that serve JSON data will serve as
"application/json; charset=UTF-8"
enable_proxy_write = <boolean>
* Indicates if the /splunkd proxy endpoint allows POST operations.
* If "true", both GET and POST operations are proxied through to
splunkd.
* If "false", only GET operations are proxied through to splunkd.
* Setting to "false" prevents many client-side packages (such as the
Splunk JavaScript SDK) from working correctly.
* Default: true
js_logger_mode_server_end_point = <URI_relative_path>
* The server endpoint to post JavaScript log messages
* Used when js_logger_mode = Server
* Default: util/log/js
js_logger_mode_server_poll_buffer = <integer>
* The interval, in milliseconds, to check, post, and cleanse the
JavaScript log buffer
* Default: 1000
js_logger_mode_server_max_buffer = <integer>
* The maximum size threshold, in megabytes, to post and cleanse the
JavaScript log buffer
* Default: 100
ui_inactivity_timeout = <integer>
* The length of time lapsed, in minutes, for notification when
there is no user interface clicking, mouseover, scrolling, or
resizing.
* Notifies client side pollers to stop, resulting in sessions expiring
at
the 'tools.sessions.timeout' value.
* If less than 1, results in no timeout notification ever being
triggered
(Sessions stay alive for as long as the browser is open).
* Default: 60
js_no_cache = <boolean>
* DEPRECATED.
* Toggles the JavaScript cache control.
* Default: false
cacheBytesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* When the total size of the objects in cache grows larger than this
setting,
in bytes, splunkd begins ageing entries out of the cache.
* If set to zero, disables the cache.
* Default: 4194304
cacheEntriesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* When the number of the objects in cache grows larger than this,
splunkd begins ageing entries out of the cache.
* If set to zero, disables the cache.
* Default: 16384
staticCompressionLevel = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* Splunkd stores these assets in a compressed format, and the assets can
usually be served directly to the web browser in compressed format.
* This level can be a number between 1 and 9. Lower numbers use less
CPU time to compress objects, but the resulting compressed objects
will be larger.
* There is not much benefit to decreasing the value of this setting
from
its default. Not much CPU time is spent compressing the objects.
* Default: 9
enable_autocomplete_login = <boolean>
* Indicates if the main login page lets browsers autocomplete the
username.
* If "true", browsers may display an autocomplete drop down in the
username field.
* If "false", browsers may not show autocomplete drop down in the
username field.
* Default: false
verifyCookiesWorkDuringLogin = <boolean>
* Normally, the login page makes an attempt to see if cookies work
properly in the user's browser before allowing them to log in.
* If you set this to "false", this check is skipped.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Do not set to "false" in normal operations.
* Default: true
minify_js = <boolean>
* Whether the static JavaScript files for modules are consolidated and
minified.
* Setting this to "true" improves client-side performance by reducing
the number of HTTP
requests and the size of HTTP responses.
minify_css = <boolean>
* Indicates whether the static CSS files for modules are consolidated
  and minified.
* Setting this to "true" improves client-side performance by reducing
  the number of HTTP requests and the size of HTTP responses.
* Due to browser limitations, disabling this when using Internet
  Explorer version 9 and earlier might result in display problems.
trap_module_exceptions = <boolean>
* Whether or not the JavaScript for individual modules is wrapped in a
try/catch
* If "true", syntax errors in individual modules do not cause the UI to
hang, other than when using the module in question.
* Set to "false" when developing apps.
enable_pivot_adhoc_acceleration = <boolean>
* DEPRECATED in version 6.1 and later. Use
  'pivot_adhoc_acceleration_mode' instead.
* Whether or not the pivot interface uses its own ad-hoc acceleration
  when a data model is not accelerated.
* If "true", the pivot interface uses ad-hoc acceleration to make
  reporting in pivot faster and more responsive.
* In situations where data is not stored in time order, or where the
  majority of events are far in the past, disabling this behavior can
  improve the pivot experience.
jschart_test_mode = <boolean>
* Whether or not the JSChart module runs in Test Mode.
* If "true", JSChart module attaches HTML classes to chart elements for
introspection.
* This negatively impacts performance and should be disabled unless you
are actively using JSChart Test Mode.
#
# To avoid browser performance impacts, the JSChart library limits
# the amount of data rendered in an individual chart.
jschart_truncation_limit = <integer>
* Cross-browser truncation limit.
* If set, takes precedence over the browser-specific limits below.
jschart_truncation_limit.chrome = <integer>
* Chart truncation limit.
* For Chrome only.
* Default: 50000
jschart_truncation_limit.firefox = <integer>
* Chart truncation limit.
* For Firefox only.
* Default: 50000
jschart_truncation_limit.safari = <integer>
* Chart truncation limit.
* For Safari only.
* Default: 50000
jschart_truncation_limit.ie11 = <integer>
* Chart truncation limit.
* For Internet Explorer version 11 only.
* Default: 50000
jschart_series_limit = <integer>
* Chart series limit for all browsers.
* Default: 100
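As an illustrative sketch (the limits are hypothetical), per-browser
truncation limits and a tighter series limit can be combined like this:

# Hypothetical chart-rendering limits; values are examples only.
[settings]
jschart_truncation_limit.chrome = 20000
jschart_truncation_limit.firefox = 20000
jschart_truncation_limit.safari = 20000
jschart_truncation_limit.ie11 = 10000
jschart_series_limit = 50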
jschart_results_limit = <integer>
* DEPRECATED.
* Use 'data_sources.primary.params.count' in visualizations.conf
instead.
* Chart results per series limit for all browsers.
* Overrides the results per series limit for individual visualizations.
* Default: 10000
choropleth_shape_limit = <integer>
* Choropleth map shape limit for all browsers.
* Default: 10000
dashboard_html_allow_inline_styles = <boolean>
* Whether or not to allow style attributes from inline HTML elements in
dashboards.
* If "false", style attributes from inline HTML elements in dashboards
will be removed
to prevent potential attacks.
* Default: true
dashboard_html_allow_iframes = <boolean>
* Whether or not to allow iframes from HTML elements in dashboards.
* If "false", iframes from HTML elements in dashboards will be removed
to prevent
potential attacks.
* Default: true
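As an illustrative sketch, a deployment that wants to strip both inline
styles and iframes from dashboard HTML would set:

# Hypothetical dashboard HTML hardening.
[settings]
dashboard_html_allow_inline_styles = false
dashboard_html_allow_iframes = false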
max_view_cache_size = <integer>
* The maximum number of views to cache in the appserver.
* Default: 300
pdfgen_is_available = [0 | 1]
* Specifies whether Integrated PDF Generation is available on this
  search head.
* This is used to bypass an extra call to splunkd.
* Default (on platforms where node is supported): 1
* Default (on platforms where node is not supported): 0
version_label_format = <printf_string>
* Internal configuration.
* Overrides the version reported by the UI to *.splunk.com resources
* Default: %s
auto_refresh_views = [0 | 1]
* Specifies whether the following actions cause the appserver to ask
  splunkd to reload views from disk.
* Logging in through Splunk Web
* Switching apps
* Clicking the Splunk logo
* Default: 0
#
# Splunk bar options
#
# Internal config. May change without notice.
# Only takes effect if 'instanceType' is 'cloud'.
#
showProductMenu = <boolean>
* Used to indicate visibility of product menu.
* Default: False.
productMenuUriPrefix = <string>
* The domain product menu links to.
* Required if 'showProductMenu' is set to "true".
productMenuLabel = <string>
* Used to change the text label for product menu.
* Default: 'My Splunk'
showUserMenuProfile = <boolean>
* Used to indicate visibility of 'Profile' link within user menu.
* Default: false
#
# Header options
#
x_frame_options_sameorigin = <boolean>
* Adds an X-Frame-Options header set to "SAMEORIGIN" to every response
  served by CherryPy.
* Default: true
#
# Single Sign On (SSO)
#
remoteUser = <http_header_string>
* Remote user HTTP header sent by the authenticating proxy server.
* This header should be set to the authenticated user.
* CAUTION: There is a potential security concern regarding the
  treatment of HTTP headers.
* Your proxy provides the selected username as an HTTP header as
  specified above.
* If the browser or other HTTP agent were to specify the value of this
  header, probably any proxy would overwrite it, or in the case that
  the username cannot be determined, refuse to pass along the request
  or set it blank.
* However, Splunk Web (specifically, CherryPy) normalizes headers
  containing the dash and the underscore to the same value. For
  example, USER-NAME and USER_NAME are treated as the same in Splunk
  Web.
* This means that if the browser provides REMOTE-USER and Splunk Web
  accepts REMOTE_USER, theoretically the browser could dictate the
  username.
* In practice, however, the proxy adds its headers last, which causes
  them to take precedence, making the problem moot.
* See also the 'remoteUserMatchExact' setting, which can enforce more
  exact header matching when running with 'appServerPorts' enabled.
* Default: 'REMOTE_USER'
remoteGroups = <http_header_string>
* Remote groups HTTP header name sent by the authenticating proxy
server.
* This value is used by Splunk Web to match against the header name.
* The header value format should be set to comma-separated groups that
the user belongs to.
* Example of header value: Products,Engineering,Quality Assurance
* No default.
remoteGroupsQuoted = <boolean>
* Whether or not the group header value can be comma-separated quoted
entries.
* This setting is considered only when 'remoteGroups' is set.
* If "true", the group header value can be comma-separated quoted
entries.
* NOTE: Entries themselves can contain commas.
* Example of header value with quoted entries:
"Products","North America, Engineering","Quality Assurance"
* Default: false (group entries should be without quotes.)
remoteUserMatchExact = [0 | 1]
* Whether or not to consider dashes and underscores in a remoteUser
  header to be distinct.
* This setting only takes effect when 'appServerPorts' is set to a
  non-zero value.
* When set to 1, considers dashes and underscores distinct (so
  "Remote-User" and "Remote_User" are considered different headers.)
* When set to 0, dashes and underscores are not considered to be
  distinct, to retain compatibility with older versions of Splunk
  software.
* Set to 1 when you set up SSO with 'appServerPorts' enabled.
* Default: 0
remoteGroupsMatchExact = [0 | 1]
* Whether or not to consider dashes and underscores in a remoteGroups
  header to be distinct.
* This setting only takes effect when 'appServerPorts' is set to a
  non-zero value.
* When set to 1, considers dashes and underscores distinct (so
  "Remote-Groups" and "Remote_Groups" are considered different headers.)
* When set to 0, dashes and underscores are not considered to be
  distinct, to retain compatibility with older versions of Splunk
  software.
* Set to 1 when you set up SSO with 'appServerPorts' enabled.
* Default: 0
trustedIP = <ip_address>
* The IP address of the authenticating proxy (trusted IP).
* Splunk Web verifies it is receiving data from the proxy host for all
SSO requests.
* Set to a valid IP address to enable SSO.
* If 'appServerPorts' is set to a non-zero value, this setting can
  accept a richer set of configurations, using the same format as the
  'acceptFrom' setting.
* Default: not set; the normal value is the loopback address
  (127.0.0.1).
allowSsoWithoutChangingServerConf = [0 | 1]
* Whether or not to allow SSO without setting the 'trustedIP' setting in
server.conf as well as in web.conf.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to 1, enables web-based SSO without a 'trustedIP' setting
  configured in server.conf.
* Default: 0
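As an illustrative sketch (the proxy address and header name are
hypothetical), a proxy-based SSO stanza combining the settings above
might look like the following:

# Hypothetical SSO configuration behind an authenticating proxy at
# 10.1.2.3 that sends the REMOTE_USER header.
[settings]
appServerPorts = 8065
trustedIP = 10.1.2.3
remoteUser = REMOTE_USER
remoteUserMatchExact = 1
allowSsoWithoutChangingServerConf = 1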
testing_endpoint = <relative_uri_path>
* The root URI path on which to serve Splunk Web unit and
integration testing resources.
* NOTE: This is a development-only setting; do not use it in normal
  operations.
* Default: /testing
testing_dir = <relative_file_path>
* The path relative to $SPLUNK_HOME that contains the testing
files to be served at endpoint defined by 'testing_endpoint'.
* NOTE: This is a development-only setting; do not use it in normal
  operations.
* Default: share/splunk/testing
ssoAuthFailureRedirect = <scheme>://<URL>
* The redirect URL to use if SSO authentication fails.
* Examples:
* http://www.example.com
* https://www.example.com
* Default: empty string; Splunk Web shows the default unauthorized error
page if SSO authentication fails.
export_timeout = <integer>
* When exporting results, the number of seconds the server waits before
closing the connection with splunkd.
* If you do not set a value for export_timeout, Splunk Web uses the
  value for the 'splunkdConnectionTimeout' setting.
* Set 'export_timeout' to a value greater than 30 in normal operations.
* No default.
#
# cherrypy HTTP server config
#
server.thread_pool = <integer>
* The minimum number of threads the appserver is allowed to maintain.
* Default: 20
server.thread_pool_max = <integer>
* The maximum number of threads the appserver is allowed to maintain.
* Default: -1 (unlimited)
server.thread_pool_min_spare = <integer>
* The minimum number of spare threads the appserver keeps idle.
* Default: 5
server.thread_pool_max_spare = <integer>
* The maximum number of spare threads the appserver keeps idle.
* Default: 10
server.socket_host = <ip_address>
* Host values may be any IPv4 or IPv6 address, or any valid hostname.
* The string 'localhost' is a synonym for '127.0.0.1' (or '::1', if your
hosts file prefers IPv6).
* The string '0.0.0.0' is a special IPv4 entry meaning "any active
interface"
(INADDR_ANY), and "::" is the similar IN6ADDR_ANY for IPv6.
* Default (if 'listenOnIPV6' is set to "no"): 0.0.0.0
* Default (otherwise): "::"
server.socket_timeout = <integer>
* The timeout, in seconds, for accepted connections between the
  browser and Splunk Web.
* Default: 10
max_upload_size = <integer>
* The hard maximum limit, in megabytes, of uploaded files.
* Default: 500
log.access_file = <filename>
* The HTTP access log filename.
* This file is written in the default $SPLUNK_HOME/var/log directory.
* Default: web_access.log
log.access_maxsize = <integer>
* The maximum size, in bytes, that the web_access.log file can be.
* Comment out or set to 0 for unlimited file size.
* Splunk Web rotates the file to web_access.log.0 after the
  'log.access_maxsize' is reached.
* See the 'log.access_maxfiles' setting to limit the number of backup
  files created.
* Default: 0 (unlimited size).
log.access_maxfiles = <integer>
* The maximum number of backup files to keep after the web_access.log
file has reached its maximum size.
* CAUTION: Setting this to very high numbers (for example, 10000) can
  affect performance during log rotation.
* Default (if 'access_maxsize' is set): 5
log.error_maxsize = <integer>
* The maximum size, in bytes, the web_service.log can be.
* Comment out or set to 0 for unlimited file size.
* Splunk Web rotates the file to web_service.log.0 after the
max file size is reached.
* See 'log.error_maxfiles' to limit the number of backup files created.
* Default: 0 (unlimited file size).
log.error_maxfiles = <integer>
* The maximum number of backup files to keep after the web_service.log
file has reached its maximum size.
* CAUTION: Setting this to very high numbers (for example, 10000) can
  affect performance during log rotation.
* Default (if 'error_maxsize' is set): 5
log.screen = <boolean>
* Whether or not runtime output is displayed inside an interactive TTY.
* Default: true
request.show_tracebacks = <boolean>
* Whether or not an exception traceback is displayed to the user on
  fatal exceptions.
* Default: true
engine.autoreload_on = <boolean>
* Whether or not the appserver will auto-restart if it detects a python
file
has changed.
* Default: false
tools.sessions.on = true
* Whether or not user session support is enabled.
* Always set this to true.
tools.sessions.timeout = <integer>
* The number of minutes of inactivity before a user session expires.
* The countdown for this setting effectively resets every minute
  through browser activity until the 'ui_inactivity_timeout' setting
  is reached.
* Use a value of 2 or higher, as a value of 1 causes a race condition
  with the browser refresh, producing unpredictable behavior.
* Low values are not useful except for testing.
* Default: 60
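As an illustrative sketch (values are hypothetical), pairing the two
timeout settings so that client-side polling stops after 30 idle
minutes and the session itself then lapses at 60 minutes looks like
this:

# Hypothetical session timeout pairing.
[settings]
ui_inactivity_timeout = 30
tools.sessions.timeout = 60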
tools.sessions.restart_persist = <boolean>
* Whether or not the session cookie is deleted from the browser when
  the browser quits.
* If set to "false", then the session cookie is deleted from the
  browser upon the browser quitting.
* If set to "true", then sessions persist across browser restarts,
  assuming the 'tools.sessions.timeout' has not been reached.
* Default: true
tools.sessions.httponly = <boolean>
* Whether or not the session cookie is available to running JavaScript
  scripts.
* If set to "true", the session cookie is not available to running
  JavaScript scripts. This improves session security.
* If set to "false", the session cookie is available to running
  JavaScript scripts.
* Default: true
tools.sessions.secure = <boolean>
* Whether or not the browser must transmit session cookies over an HTTPS
connection when Splunk Web is configured to serve requests using HTTPS
(the 'enableSplunkWebSSL' setting is "true".)
* If set to "true" and 'enableSplunkWebSSL' is also "true", then the
browser must transmit the session cookie over HTTPS connections.
This improves session security.
* See the 'enableSplunkWebSSL' setting for details on configuring HTTPS
session support.
* Default: true
tools.sessions.forceSecure = <boolean>
* Whether or not the secure bit of a session cookie that has been sent
over HTTPS is set.
* If a client connects to a proxy server over HTTPS, and the back end
connects to Splunk over HTTP, then setting this to "true" forces the
session cookie being sent back to the client over HTTPS to have the
secure bit set.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Default: false
response.timeout = <integer>
* The timeout, in seconds, to wait for the server to complete a
response.
* Some requests, such as uploading large files, can take a long time.
* Default: 7200 (2 hours).
tools.sessions.storage_type = [file]
tools.sessions.storage_path = <filepath>
* Specifies the session information storage mechanisms.
* Comment out 'tools.sessions.storage_type' and
  'tools.sessions.storage_path' to use RAM based sessions instead.
* Use an absolute path to store sessions outside of $SPLUNK_HOME.
* Default: storage_type=file, storage_path=var/run/splunk
tools.decode.on = <boolean>
* Whether or not all strings that come into CherryPy controller
  methods are decoded as unicode (assumes UTF-8 encoding).
* CAUTION: Setting this to false will likely break the application, as
  all incoming strings are assumed to be unicode.
* Default: true
tools.encode.on = <boolean>
* Whether or not to encode all controller method response strings into
UTF-8 str objects in Python.
* CAUTION: Disabling this will likely cause high byte character
  encoding to fail.
* Default: true
tools.encode.encoding = <codec>
* Forces all outgoing characters to be encoded into UTF-8.
* This setting only takes effect when 'tools.encode.on' is set to
"true".
* By setting this to "utf-8", CherryPy default behavior of observing the
Accept-Charset header is overwritten and forces utf-8 output.
* Only change this if you know a particular browser installation must
receive some other character encoding (Latin-1 iso-8859-1, etc)
* CAUTION: Change this setting at your own risk.
* Default: utf-8
tools.proxy.on = <boolean>
* Used for running Apache as a proxy for Splunk Web, typically for SSO
configurations.
* Search the CherryPy website for "apache proxy" for more information.
* For Apache 1.x proxies only, set to "true". This configuration tells
  CherryPy (the Splunk Web HTTP server) to look for an incoming
  X-Forwarded-Host header and to use the value of that header to
  construct canonical redirect URLs that include the proper host name.
  For more information, refer to the CherryPy documentation on running
  behind an Apache proxy. This setting is only necessary for Apache 1.1
  proxies.
* For all other proxies, you must set this to "false".
* Default: false
tools.proxy.base = <scheme>://<URL>
* The proxy base URL in Splunk Web.
* Default: empty string
pid_path = <filepath>
* Specifies the path to the Process IDentification (pid) number file.
* Must be set to "var/run/splunk/splunkweb.pid".
* CAUTION: Do not change this parameter.
simple_xml_perf_debug = <boolean>
* Whether or not Simple XML dashboards log performance metrics to the
  browser console.
* If set to "true", Simple XML dashboards log some performance metrics
  to the browser console.
* Default: false
job_min_polling_interval = <integer>
* The minimum polling interval, in milliseconds, for search jobs.
* This is the initial wait time for fetching results.
* The poll period increases gradually from the minimum interval
to the maximum interval when search is in a queued or parsing
state (and not a running state) for some time.
* Set this value between 100 and 'job_max_polling_interval'
milliseconds.
* Default: 100
job_max_polling_interval = <integer>
* The maximum polling interval, in milliseconds, for search jobs.
* This is the maximum wait time for fetching results.
* In normal operations, set to 3000.
* Default: 1000
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept connections.
* This setting only takes effect when 'appServerPorts' is set to a
  non-zero value.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A Classless Inter-Domain Routing (CIDR) block of addresses
(examples: "10/8", "192.168.1/24", "fe80:1234/32")
3. A DNS name, possibly with a "*" used as a wildcard
(examples: "myhost.example.com", "*.splunk.com")
4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the
  connection. The input applies rules in order, and uses the first one
  that matches.
  For example, "!10.1/16, *" allows connections from everywhere except
  the 10.1.*.* network.
* Default: "*" (accept from anywhere)
maxThreads = <integer>
* The number of threads that can be used for active HTTP transactions.
* This setting only takes effect when appServerPorts is set to a
non-zero value.
* This value can be limited to constrain resource usage.
* If set to 0, a limit is automatically picked based on
estimated server capacity.
* If set to a negative number, no limits are enforced.
* Default: 0
maxSockets = <integer>
* The number of simultaneous HTTP connections that Splunk Web can
accept.
* This setting only takes effect when appServerPorts is set to a
non-zero value.
* This value can be limited to constrain resource usage.
* If set to 0, a limit is automatically picked based on estimated
server capacity.
* If set to a negative number, no limits are enforced.
* Default: 0
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunk Web HTTP server lets a
  keep-alive connection remain idle before forcibly disconnecting it.
* If this number is less than 7200, it will be set to 7200.
* Default: 7200
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunk Web HTTP server lets a
  keep-alive connection remain idle while in a busy state before
  forcibly disconnecting it.
* CAUTION: Too large a value can result in file descriptor exhaustion
  due to idling connections.
* If this number is less than 12, it will be set to 12.
* Default: 12
forceHttp10 = auto|never|always
* How the HTTP server deals with HTTP/1.0 support for incoming
clients.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* When set to "always", the REST HTTP server does not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto", it limits HTTP 1.1 features only if the
client sent no User-Agent header, or if the user agent is known
to have bugs in its HTTP/1.1 support.
* When set to "never", it always allows HTTP 1.1, even to
clients it suspects might be buggy.
* Default: auto
allowSslCompression = <boolean>
* Whether or not the server lets clients negotiate SSL-layer data
compression.
* This setting only takes effect when 'appServerPorts' is set
  to a non-zero value. When 'appServerPorts' is zero or not set, this
  setting is "true".
* If set to "true", the server lets clients negotiate SSL-layer
  data compression.
* The HTTP layer has its own compression layer which is usually
  sufficient.
* Default (if 'appServerPorts' is set and not 0): false
* Default (if 'appServerPorts' is 0 or not set): true
allowSslRenegotiation = <boolean>
* Whether or not the server lets clients renegotiate SSL connections.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* In the SSL protocol, a client may request renegotiation of the
  connection settings from time to time.
* Setting this to "false" causes the server to reject all renegotiation
  attempts, breaking the connection.
* This limits the amount of CPU a single TCP connection can use, but it
  can cause connectivity problems, especially for long-lived
  connections.
* Default: true
sendStrictTransportSecurityHeader = <boolean>
* Whether or not the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to "true", the REST interface sends a
"Strict-Transport-Security"
header with all responses to requests made over SSL.
* This can help avoid a client being tricked later by a
Man-In-The-Middle
attack to accept a non-SSL request.
* This requires a commitment that no non-SSL web hosts will ever be
run on this hostname on any port. For example, if splunkweb is in
default
non-SSL mode this can break the ability of browser to connect to it.
* Enable this setting with caution.
* Default: false
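As an illustrative sketch, an HTTPS-only deployment that disables
SSL-layer compression and advertises strict transport security might
use the following:

# Hypothetical TLS-facing hardening; both settings require a non-zero
# 'appServerPorts', and the HSTS header assumes no non-SSL host will
# ever run on this hostname.
[settings]
allowSslCompression = false
sendStrictTransportSecurityHeader = true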
dedicatedIoThreads = <integer>
* The number of dedicated threads to use for HTTP input/output
operations.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to zero, HTTP I/O is performed in the same thread
  that accepted the TCP connection.
* If set to a non-zero value, separate threads run
  to handle the HTTP I/O, including SSL encryption.
* Typically this does not need to be changed. For most usage
  scenarios, using the same thread offers the best performance.
* Default: 0
replyHeader.<name> = <string>
* Adds a static header to all HTTP responses that this server generates.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* For example, "replyHeader.My-Header = value" causes Splunk Web to
907
include
the response header "My-Header: value" in the reply to every HTTP
request
to it.
* No default.
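As an illustrative sketch (the header names and values are
hypothetical), two static response headers can be added like this:

# Hypothetical static response headers.
[settings]
replyHeader.X-Content-Type-Options = nosniff
replyHeader.X-My-Deployment = production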
termsOfServiceDirectory = <directory>
* The directory to look in for a "Terms of Service" document that each
user must accept before logging into Splunk Web.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Inside the directory, the TOS should have a filename in the format
  "<number>.html".
  * <number> is in the range 1 to 18446744073709551615.
  * The active TOS is the filename with the larger number. For example,
    if there are two files in the directory named "123.html" and
    "456.html", then 456 will be the active TOS version.
* If a user has not accepted the current version of the TOS, they must
  accept it the next time they try to log in. The acceptance times
  will be recorded inside a "tos.conf" file inside an app called "tos".
  * If the "tos" app does not exist, you must create it for acceptance
    times to be recorded.
* The TOS file can either be a full HTML document or plain text, but
  it must have the ".html" suffix.
* You do not need to restart Splunk Enterprise when adding files to
  the TOS directory.
* Default: empty string (no TOS)
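As an illustrative sketch (the directory path is hypothetical), a Terms
of Service setup might place "1.html" in a directory and point the
setting at it; adding "2.html" later makes it the active version that
users must accept at their next login:

# Hypothetical Terms of Service directory.
[settings]
termsOfServiceDirectory = etc/system/static/tos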
enableWebDebug = <boolean>
* Whether or not the debug REST endpoints are accessible, for example,
  /debug/**splat.
* Default: false
enable_risky_command_check = <boolean>
* Whether or not checks for data-exfiltrating search commands are
enabled.
* Default: true
loginCustomBackgroundImage = <pathToMyFile or myApp:pathToMyFile>
* Customizes the login page background image.
* Supported image files include .jpg, .jpeg, or .png with a maximum
  file size of 20MB.
* A landscape image is recommended, with a minimum resolution of
  1024x640 pixels.
* Using Splunk Web:
  * Upload a custom image to a manager page under General Settings.
  * The login page background image updates automatically.
* Using the CLI or a text editor:
  * Set 'loginBackgroundImageOption' to "custom".
  * Place the custom image file in the default or manual location:
    * Default destination folder:
      $SPLUNK_HOME/etc/apps/search/appserver/static/logincustombg.
      * Example: If your image is located at
        $SPLUNK_HOME/etc/apps/search/appserver/static/logincustombg/img.png,
        set 'loginCustomBackgroundImage' to "logincustombg/img.png".
    * Manual location:
      $SPLUNK_HOME/etc/apps/<myApp>/appserver/static/<pathToMyFile>, and
      set 'loginCustomBackgroundImage' to "<myApp:pathToMyFile>".
  * The login page background image updates automatically.
* Default: not set (If no custom image is used, the default Splunk
  background image displays).
loginFooterText = <footer_text>
* The text to display in the footer of the login page.
* Supports any text, including HTML.
* To display, the parameter 'loginFooterOption' must be set to
"custom".
loginDocumentTitleOption = [default | custom | none]
* Specifies the document title of the login page.
* "default" displays: "<page_title> | Splunk".
* "none" removes the branding on the document title of the login page:
"<page_title>".
* "custom" uses the document title text defined by the
loginDocumentTitleText setting.
* NOTE: This option is made available only to OEM customers
  participating in the Splunk OEM Partner Program and is subject to the
  relevant terms of the Master OEM Agreement. All other customers or
  partners are prohibited from removing or altering any copyright,
  trademark, and/or other intellectual property or proprietary rights
  notices of Splunk placed on or embedded in any Splunk materials.
* Default: "default".
loginDocumentTitleText = <document_title_text>
* The text to display in the document title of the login page.
* Text only.
* To display, the parameter 'loginDocumentTitleOption' must be set to
"custom".
loginPasswordHint = <default_password_hint>
* The text to display as the password hint at first-time login on the
  login page.
* Text only.
* Default: "changeme"
appNavReportsLimit = <integer>
* Maximum number of reports to fetch to populate the navigation
  drop-down menu of an app.
* An app must be configured to list reports in its navigation XML
  configuration before it can list any reports.
* Set to -1 to display all the available reports in the navigation menu.
* NOTE: Setting to either -1 or a value that is higher than the default
  might result in decreased browser performance due to listing large
  numbers of available reports in the drop-down menu.
* Default: 500
[framework]
# Put App Framework settings here
django_enable = <boolean>
* Specifies whether Django should be enabled or not
* Default: True
* Django will not start unless an app requires it
django_path = <filepath>
* Specifies the root path to the new App Framework files,
  relative to $SPLUNK_HOME.
* Default: etc/apps/framework
django_force_enable = <boolean>
* Specifies whether to force Django to start, even if no app requires it
* Default: False
#
# custom cherrypy endpoints
#
[endpoint:<python_module_name>]
* Registers a custom python CherryPy endpoint.
* The expected file must be located at:
  $SPLUNK_HOME/etc/apps/<APP_NAME>/appserver/controllers/<PYTHON_MODULE_NAME>.py
* This module's methods will be exposed at
  /custom/<APP_NAME>/<PYTHON_MODULE_NAME>/<METHOD_NAME>
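As an illustrative sketch (the app name "myapp" and module name "hello"
are hypothetical), registering a controller shipped at
$SPLUNK_HOME/etc/apps/myapp/appserver/controllers/hello.py looks like
this; its methods then appear under /custom/myapp/hello/<METHOD_NAME>:

# Hypothetical registration of the "hello" controller module.
[endpoint:hello]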
#
# exposed splunkd REST endpoints
#
[expose:<unique_name>]
* Registers a splunkd-based endpoint that should be made available to
  the UI under the "/splunkd" and "/splunkd/__raw" hierarchies.
* The name of the stanza does not matter as long as it begins with
  "expose:".
* Each stanza name must be unique.
pattern = <url_pattern>
* The pattern to match under the splunkd /services hierarchy.
* For instance, "a/b/c" would match URIs "/services/a/b/c" and
"/servicesNS/*/*/a/b/c",
* The pattern cannot include leading or trailing slashes.
* Inside the pattern an element of "*" matches a single path element.
For example, "a/*/c" would match "a/b/c" but not "a/1/2/c".
* A path element of "**" matches any number of elements. For example,
"a/**/c" would match both "a/1/c" and "a/1/2/3/c".
* A path element can end with a "*" to match a prefix. For example,
"a/elem-*/b" would match "a/elem-123/c".
methods = <method_lists>
* A comma-separated list of methods to allow from the web browser
(example: "GET,POST,DELETE").
* Default: "GET"
oidEnabled = [0 | 1]
* Whether or not a REST endpoint is capable of taking an embed-id as a
query parameter.
* If set to 1, the endpoint is capable of taking an embed-id
  as a query parameter.
* This is only needed for some internal Splunk endpoints; you probably
  should not specify this for app-supplied endpoints.
* Default: 0
skipCSRFProtection = [0 | 1]
* Whether or not Splunk Web can safely post to an endpoint without
  applying Cross-Site Request Forgery (CSRF) protection.
* If set to 1, tells Splunk Web that it is safe to post to this
  endpoint without applying CSRF protection.
* This should only be set on the login endpoint (which already
  contains sufficient auth credentials to avoid CSRF problems).
* Default: 0
web.conf.example
# Version 7.2.1
#
# This is an example web.conf. Use this file to configure data web
# settings.
#
# To use one or more of these configurations, copy the configuration
# block into web.conf in $SPLUNK_HOME/etc/system/local/. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Turn on SSL:
enableSplunkWebSSL = true
# absolute paths may be used here.
privKeyPath = /home/user/certs/myprivatekey.pem
serverCert = /home/user/certs/mycacert.pem
# NOTE: non-absolute paths are relative to $SPLUNK_HOME
wmi.conf
The following are the spec and example files for wmi.conf.
wmi.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring
# Windows Management Instrumentation (WMI) access from Splunk.
#
# There is a wmi.conf in $SPLUNK_HOME\etc\system\default\. To set
# custom configurations, place a wmi.conf in
# $SPLUNK_HOME\etc\system\local\. For examples, see wmi.conf.example.
#
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[settings]
* The settings stanza specifies various runtime parameters.
* The entire stanza and every parameter within it is optional.
* If the stanza is missing, Splunk assumes system defaults.
initial_backoff = <integer>
* How long, in seconds, to wait before retrying the connection to
the WMI provider after the first connection error.
* If connection errors continue, the wait time doubles until it reaches
the integer specified in max_backoff.
* Defaults to 5.
max_backoff = <integer>
* The maximum time, in seconds, to attempt to reconnect to the
  WMI provider.
* Defaults to 20.
max_retries_at_max_backoff = <integer>
* Once max_backoff is reached, tells Splunk how many times to attempt
to reconnect to the WMI provider.
* Splunk will try to reconnect every max_backoff seconds.
* If reconnection fails after max_retries, give up forever (until
restart).
* Defaults to 2.
checkpoint_sync_interval = <integer>
* The minimum wait time, in seconds, for state data (event log
  checkpoint) to be written to disk.
* Defaults to 2.
INPUT-SPECIFIC SETTINGS
[WMI:<name>]
* There are two types of WMI stanzas:
* Event log: for pulling event logs. You must set the
event_log_file attribute.
* WQL: for issuing raw Windows Query Language (WQL) requests. You
must set the wql attribute.
* Do not use both the event_log_file and the wql attributes. Use
  one or the other.
interval = <integer>
* How often, in seconds, to poll for new data.
* This attribute is required, and the input will not run if the
attribute is
not present.
* There is no default.
disabled = [0|1]
* Specifies whether the input is enabled or not.
* 1 to disable the input, 0 to enable it.
* Defaults to 0 (enabled).
hostname = <host>
* All results generated by this stanza will appear to have arrived from
  the string specified here.
* This attribute is optional.
* If it is not present, the input will detect the host automatically.
current_only = [0|1]
* Changes the characteristics and interaction of WMI-based event
collections.
* When current_only is set to 1:
* For event log stanzas, this will only capture events that occur
while Splunk is running.
* For WQL stanzas, event notification queries are expected. The
queried class must support sending events. Failure to supply
the correct event notification query structure will cause
WMI to return a syntax error.
* An example event notification query that watches for process
creation:
* SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE
TargetInstance ISA 'Win32_Process'.
* When current_only is set to 0:
* For event log stanzas, all the events from the checkpoint are
gathered. If there is no checkpoint, all events starting from
the oldest events are retrieved.
* For WQL stanzas, the query is executed and results are retrieved.
The query is a non-notification query.
* For example:
  * Select * from Win32_Process where caption = "explorer.exe"
* Defaults to 0.
use_old_eventlog_api = <bool>
* Whether or not to read Event Log events with the Event Logging API.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* If set to true, the input uses the Event Logging API (instead of the
  Windows Event Log API) to read from the Event Log on Windows Server
  2008, Windows Vista, and later installations.
* Defaults to false (use the API that is specific to the OS.)
use_threads = <integer>
* Specifies the number of threads, in addition to the default writer
  thread, that can be created to filter events with the
  blacklist/whitelist regular expression.
* The maximum number of threads is 15.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to 0
thread_wait_time_msec = <integer>
* The interval, in milliseconds, between attempts to re-read Event Log
files when a read error occurs.
* This is an advanced setting. Contact Splunk Support before you change
it.
* Defaults to 5000
suppress_checkpoint = <bool>
* Whether or not the Event Log strictly follows the
  'checkpointInterval' setting when it saves a checkpoint.
* By default, the Event Log input saves a checkpoint from between zero
  and 'checkpointInterval' seconds, depending on incoming event volume.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
suppress_sourcename = <bool>
* Whether or not to exclude the 'sourcename' field from events.
* When set to true, the input excludes the 'sourcename' field from
  events and thruput performance (the number of events processed per
  second) improves.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
suppress_keywords = <bool>
* Whether or not to exclude the 'keywords' field from events.
* When set to true, the input excludes the 'keywords' field from
  events and thruput performance (the number of events processed per
  second) improves.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
suppress_type = <bool>
* Whether or not to exclude the 'type' field from events.
* When set to true, the input excludes the 'type' field from events
  and thruput performance (the number of events processed per second)
  improves.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
suppress_task = <bool>
* Whether or not to exclude the 'task' field from events.
* When set to true, the input excludes the 'task' field from events
  and thruput performance (the number of events processed per second)
  improves.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
suppress_opcode = <bool>
* Whether or not to exclude the 'opcode' field from events.
* When set to true, the input excludes the 'opcode' field from events
  and thruput performance (the number of events processed per second)
  improves.
* This is an advanced setting. Contact Splunk Support before you change
  it.
* Defaults to false
batch_size = <integer>
* Number of events to fetch on each query.
* Defaults to 10.
checkpointInterval = <integer>
* How often, in seconds, that the Windows Event Log input saves a
checkpoint.
* Checkpoints store the eventID of acquired events. This lets the input
continue monitoring at the correct event after a shutdown or outage.
* The default value is 0.
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* When defined, "index=" is automatically prepended to <string>.
* Defaults to "index=main" (or whatever you have set as your default
index).
disable_hostname_normalization = [0|1]
* If set to true, hostname normalization is disabled.
* If absent or set to false, the hostname for 'localhost' will be
  converted to %COMPUTERNAME%.
* 'localhost' refers to the following list of strings: localhost,
  127.0.0.1, ::1, the name of the DNS domain for the local computer,
  the fully qualified DNS name, the NetBIOS name, the DNS host name of
  the local computer.
WQL-specific attributes:
wql = <string>
* Tells Splunk to expect data from a WMI provider for this stanza, and
  specifies the WQL query you want Splunk to make to gather that data.
* Use this if you are not using the event_log_file attribute.
* Ensure that your WQL queries are syntactically and structurally
  correct when using this option.
* For example,
  SELECT * FROM Win32_PerfFormattedData_PerfProc_Process WHERE Name =
  "splunkd".
* If you wish to use event notification queries, you must also set the
  "current_only" attribute to 1 within the stanza, and your query must
  be appropriately structured for event notification (meaning it must
  contain one or more of the GROUP, WITHIN or HAVING clauses.)
* For example,
  SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance
  ISA 'Win32_Process'
* There is no default.
namespace = <string>
* The namespace where the WMI provider resides.
* The namespace spec can either be relative (root\cimv2) or absolute
(\\server\root\cimv2).
* If the server attribute is present, you cannot specify an absolute
namespace.
* Defaults to root\cimv2.
wmi.conf.example
# Version 7.2.1
#
# This is an example wmi.conf. These settings are used to control
# inputs from WMI providers. Refer to wmi.conf.spec and the
# documentation at splunk.com for more information about this file.
#
# To use one or more of these configurations, copy the configuration
# block into wmi.conf in $SPLUNK_HOME\etc\system\local\. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[settings]
initial_backoff = 5
max_backoff = 20
max_retries_at_max_backoff = 2
checkpoint_sync_interval = 2
# Pull events from the Application, System and Security event logs
# from the local system every 10 seconds. Store the events in the
# "wmi_eventlog" Splunk index.
[WMI:LocalApplication]
interval = 10
event_log_file = Application
disabled = 0
index = wmi_eventlog
[WMI:LocalSystem]
interval = 10
event_log_file = System
disabled = 0
index = wmi_eventlog
[WMI:LocalSecurity]
interval = 10
event_log_file = Security
disabled = 0
index = wmi_eventlog
# Gather disk and memory performance metrics from the local system every
# second. Store event in the "wmi_perfmon" Splunk index.
[WMI:LocalPhysicalDisk]
interval = 1
wql = select Name, DiskBytesPerSec, PercentDiskReadTime,
PercentDiskWriteTime, PercentDiskTime from
Win32_PerfFormattedData_PerfDisk_PhysicalDisk
disabled = 0
index = wmi_perfmon
[WMI:LocalMainMemory]
interval = 10
wql = select CommittedBytes, AvailableBytes,
PercentCommittedBytesInUse, Caption from
Win32_PerfFormattedData_PerfOS_Memory
disabled = 0
index = wmi_perfmon
# Collect all process-related performance metrics for the splunkd
# process, every second. Store those events in the "wmi_perfmon" index.
[WMI:LocalSplunkdProcess]
interval = 1
wql = select * from Win32_PerfFormattedData_PerfProc_Process where Name
= "splunkd"
disabled = 0
index = wmi_perfmon
# Listen from three event log channels, capturing log events that
# occur only while Splunk is running, every 10 seconds. Gather data
# from three remote servers srv1, srv2 and srv3.
[WMI:TailApplicationLogs]
interval = 10
event_log_file = Application, Security, System
server = srv1, srv2, srv3
disabled = 0
current_only = 1
batch_size = 10
[WMI:ProcessCreation]
interval = 1
server = remote-machine
wql = select * from __InstanceCreationEvent within 1 where
TargetInstance isa 'Win32_Process'
disabled = 0
current_only = 1
batch_size = 10
[WMI:USBChanges]
interval = 1
wql = select * from __InstanceOperationEvent within 1 where
TargetInstance ISA 'Win32_PnPEntity' and
TargetInstance.Description='USB Mass Storage Device'
disabled = 0
current_only = 1
batch_size = 10
workflow_actions.conf
The following are the spec and example files for workflow_actions.conf.
workflow_actions.conf.spec
# Version 7.2.1
#
# This file contains possible attribute/value pairs for configuring
# workflow actions in Splunk.
#
# There is a workflow_actions.conf in
# $SPLUNK_HOME/etc/apps/search/default/.
# To set custom configurations, place a workflow_actions.conf in either
# $SPLUNK_HOME/etc/system/local/ or add a workflow_actions.conf file to
# your app's local/ directory. For examples, see
# workflow_actions.conf.example.
# You must restart Splunk to enable configurations, unless editing them
# through the Splunk manager.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
############################################################################
# General required settings:
# These apply to all workflow action types.
############################################################################
type = <string>
* The type of the workflow action.
* If not set, Splunk skips this workflow action.
label = <string>
* The label to display in the workflow action menu.
* If not set, Splunk skips this workflow action.
############################################################################
# General optional settings:
# These settings are not required but are available for all workflow
# actions.
############################################################################
display_location = <string>
* Dictates whether to display the workflow action in the event menu, the
field menus or in both locations.
* Accepts field_menu, event_menu, or both.
* Defaults to both.
disabled = [True | False]
* Dictates whether the workflow action is currently disabled.
* Defaults to False
$@field_name$
* Allows for the name of the current field being clicked on to be used
  in a field action.
* Useful when constructing searches or links that apply to all fields.
* NOT AVAILABLE FOR EVENT MENUS
$@field_value$
* Allows for the value of the current field being clicked on to be used
  in a field action.
* Useful when constructing searches or links that apply to all fields.
* NOT AVAILABLE FOR EVENT MENUS
$@sid$
* The sid of the current search job.
$@offset$
* The offset of the event being clicked on in the list of search
events.
$@namespace$
* The name of the application from which the search was run.
$@latest_time$
* The latest time the event occurred. This is used to disambiguate
  similar events from one another. It is not often available for all
  fields.
############################################################################
# Link type:
# Allows for the construction of GET and POST requests via links to
# external resources.
############################################################################
link.uri = <string>
* The URI for the resource to link to.
* Accepts field values in the form $<field name>$ (e.g. $_raw$).
* All inserted values are URI encoded.
* Required
link.target = <string>
* Determines if clicking the link opens a new window, or redirects the
current window to the resource defined in link.uri.
* Accepts: "blank" (opens a new window), "self" (opens in the same
window)
* Defaults to "blank"
link.method = <string>
* Determines if clicking the link should generate a GET request or a
  POST request to the resource defined in link.uri.
* Accepts: "get" or "post".
* Defaults to "get".
link.postargs.<int>.<key/value> = <value>
* Only available when link.method = post.
* Defined as a list of key/value pairs such that foo=bar becomes:
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
* Allows for a conf compatible method of defining multiple identical
keys (e.g.):
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
link.postargs.2.key = "foo"
link.postargs.2.value = "boo"
...
* All values are html form encoded appropriately.
############################################################################
# Search type:
# Allows for the construction of a new search to run in a specified
# view.
############################################################################
search.search_string = <string>
* The search string to construct.
* Accepts field values in the form $<field name>$, (e.g. $_raw$).
* Does NOT attempt to determine if the inserted field values may break
quoting or other search language escaping.
* Required
search.app = <string>
* The name of the Splunk application in which to perform the
  constructed search.
* By default this is set to the current app.
search.view = <string>
* The name of the view in which to perform the constructed search.
* By default this is set to the current view.
search.target = <string>
* Accepts: blank, self.
* Works in the same way as link.target. See link.target for more info.
search.earliest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the earliest time to search from.
search.latest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the latest time to search to.
search.preserve_timerange = <boolean>
* Ignored if either the search.earliest or search.latest values are
  set.
* When true, the time range from the original search which produced
  the events list will be used.
* Defaults to false.
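As an illustrative sketch (the stanza name, label, index, and search
string are hypothetical), a search-type action built from the settings
above might look like the following:

# Hypothetical search-type workflow action: pivot from an event to
# recent errors logged by the same host.
[search_host_errors]
type = search
label = Search recent errors on $host$
display_location = event_menu
fields = host
search.search_string = index=_internal host=$host$ log_level=ERROR
search.earliest = -4h
search.target = blank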
workflow_actions.conf.example
# Version 7.2.1
#
# This is an example workflow_actions.conf. These settings are used to
# create workflow actions accessible in an event viewer. Refer to
# workflow_actions.conf.spec and the documentation at splunk.com for
# more information about this file.
#
# To use one or more of these configurations, copy the configuration
# block into workflow_actions.conf in $SPLUNK_HOME/etc/system/local/,
# or into your application's local/ folder. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# These are the default workflow actions and make extensive use of the
# special parameters: $@namespace$, $@sid$, etc.
[show_source]
type=link
fields = _cd, source, host, index
display_location = event_menu
label = Show Source
link.uri =
/app/$@namespace$/show_source?sid=$@sid$&offset=$@offset$&latest_time=$@latest_time$
[ifx]
type = link
display_location = event_menu
label = Extract Fields
link.uri = /ifx?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
[etb]
type = link
display_location = event_menu
label = Build Eventtype
link.uri = /etb?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
[whois]
display_location = field_menu
fields = clientip
label = Whois: $clientip$
link.method = get
link.target = blank
link.uri = http://ws.arin.net/whois/?queryinput=$clientip$
type = link
# This is an example field action which will allow a user to search
# every field value in Google.
[Google]
display_location = field_menu
fields = *
label = Google $@field_name$
link.method = get
link.uri = http://www.google.com/search?q=$@field_value$
type = link
# This is an example post link that will send its field name and field
# value to a fictional bug tracking system.
workload_pools.conf
The following are the spec and example files for workload_pools.conf.
workload_pools.conf.spec
# Version 7.2.1
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure workloads for splunk.
#
# There is a workload_pools.conf file in the
# $SPLUNK_HOME/etc/system/default/ directory. Never change or copy the
# configuration files in the default directory. The files in the
# default directory must remain intact and in their original location.
#
# To set custom configurations, create a new file with the name
# workload_pools.conf in the $SPLUNK_HOME/etc/system/local/ directory.
# Then add the specific settings that you want to customize to the
# local configuration file. For examples, see
# workload_pools.conf.example. You may need to restart the Splunk
# instance to enable configuration changes.
#
# To learn more about configuration files (including file precedence)
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
# * You can also define global settings outside of any stanza, at the
#   top of the file.
# * Each .conf file should have at most one default stanza. If there
#   are multiple default stanzas, settings are combined. In the case
#   of multiple definitions of the same setting, the last definition
#   in the file takes precedence.
# * If a setting is defined at both the global level and in a specific
#   stanza, the value in the specific stanza takes precedence.
#
# CAUTION: Do not alter the settings in the workload_pools.conf file
# unless you know what you are doing. Improperly configured workloads
# might result in splunkd crashes, memory overuse, or both.
[general]
enabled = <bool>
* Specifies whether workload management has been enabled on the system
or not.
* This setting only applies to the default stanza as a global setting.
* Default: false
default_pool = <string>
* Specifies the default workload pool to be used at runtime for search
  workloads.
* Admin users can specify workload pools associated with roles. If no
  workload pool can be found, then the system falls back to this
  default_pool as defined in the general stanza of workload_pools.conf.
* This setting is only applicable when workload management has been
  enabled in the system. If workload management has been enabled, this
  is a mandatory setting.
ingest_pool = <string>
* Specifies the workload pool for the splunkd process that controls
  ingestion and other actions in the Splunk deployment.
* Use this setting to guarantee a minimum lower-bound for resources
  for tasks controlled and managed by splunkd.
* This setting is only applicable when workload management has been
  enabled in the system. If workload management has been enabled, this
  is a mandatory setting.
workload_pool_base_dir_name = <string>
* Specifies the base controller directory name for Splunk cgroups on
  Linux to be used by a Splunk deployment.
* Workload pools created from the workload management page are all
  created relative to this base directory.
* This setting is only applicable when workload management has been
  enabled in the system. If workload management has been enabled, this
  is a mandatory setting.
* Default: splunk
[workload_pool:<pool_name>]
cpu_weight = <number>
* Specifies the cpu weight to be used by this workload pool.
* This is effectively a relative ratio or fraction of the total
  weights assigned across all the workload pools.
* Note that this is not a percentage, but a relative weight as a
  fraction of the total weight calculated by summing all workload pool
  weights.
* This is a mandatory parameter for the creation of a workload pool
  and only allows positive integral values.
* Default is unset
mem_weight = <number>
* Specifies the memory weight to be used by this workload pool.
* This is effectively a ratio or fraction of the total weights
  assigned across all the workload pools.
* Note that this is not a percentage, but a relative weight as a
  fraction of the total weight calculated by summing all workload pool
  weights.
* This is a mandatory parameter for the creation of a workload pool
  and only allows positive integral values.
* Default is unset
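As a worked illustration of the weight arithmetic (the pool names and
numbers are hypothetical): given two pools with cpu_weight = 40 and
cpu_weight = 60, the first pool is entitled to 40 / (40 + 60) = 40% of
CPU and the second to 60%. Adding a third pool with cpu_weight = 100
rescales those shares to 20%, 30%, and 50%.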
workload_pools.conf.example
# Version 7.2.1
# CAUTION: Do not alter the settings in workload_pools.conf unless you
# know what you are doing. Improperly configured workloads may result
# in splunkd crashes and/or memory overuse.
[general]
enabled = false
default_pool = pool_1
ingest_pool = pool_2
workload_pool_base_dir_name = splunk
[workload_pool:pool_1]
cpu_weight = 40
mem_weight = 40
[workload_pool:pool_2]
cpu_weight = 30
mem_weight = 30
[workload_pool:pool_3]
cpu_weight = 20
mem_weight = 20
[workload_pool:pool_4]
cpu_weight = 10
mem_weight = 10
workload_rules.conf
The following are the spec and example files for workload_rules.conf.
workload_rules.conf.spec
# Version 7.2.1
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure workload classification rules for splunk.
#
# There is a workload_rules.conf file in the
# $SPLUNK_HOME/etc/system/default/ directory. Never change or copy the
# configuration files in the default directory. The files in the
# default directory must remain intact and in their original location.
#
# To set custom configurations, create a new file with the name
# workload_rules.conf in the $SPLUNK_HOME/etc/system/local/ directory.
# Then add the specific settings that you want to customize to the
# local configuration file. For examples, see
# workload_rules.conf.example. You do not need to restart the Splunk
# instance to enable workload_rules.conf configuration changes.
#
# To learn more about configuration files (including file precedence)
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
[workload_rule:<rule_name>]
predicate = <string>
* Specifies the predicate of this workload classification rule. The
  format is <type>=<value>.
* The valid <type> values are "app" and "role". The <value> is the
  exact value of the <type>.
* For example, for the "app" type, the value is the name of the app,
  such as "search". For the "role" type, the value can be "admin".
* Required.
workload_pool = <string>
* Specifies the name of the workload pool, for example "pool1".
* The pool name specified must be defined earlier through a
  [workload_pool:<pool_name>] stanza in workload_pools.conf.
* Required.
[workload_rules_order]
rules = <string>
* List of all workload classification rules.
* The format of the string is comma-separated items:
  "rule1,rule2,...".
* The rules listed are defined in [workload_rule:<rule_name>] stanzas.
* The order of the rule names in the list determines the priority of
  each rule. For example, in "rule1,rule2", rule1 has higher priority
  than rule2.
* The default value for this property is empty, meaning there are no
  rules defined.
workload_rules.conf.example
[workload_rules_order]
rules = my_analyst_rule,my_app_rule
[workload_rule:my_app_rule]
predicate = app=search
workload_pool = my_app_pool
[workload_rule:my_analyst_rule]
predicate = role=analyst
workload_pool = my_analyst_pool