Splunk Enterprise Security Administrator Study Notes 1631861326
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SecPosDB.png
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevDB.png
On the Incident Review page, notable events are listed in reverse date order. In this example, we have one critical event
and 77 high events. Filter to show only the critical events, then click Submit:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevUrgency.png
From here, indicate to other analysts that this notable event is currently being analysed. Click the checkbox to select the
event of interest. Alternatively, select multiple events, followed by Edit all X matching events to change their status
in bulk. In this case, choose Edit Selected to update the single event. Change the Status to In Progress and click the
Assign To Me link to assign your own username as the Owner. Add a comment for context if necessary, then click Save
changes to return to the Incident Review dashboard.
Clicking the > arrow to the left of the event will expand details to include the following:
• Description
• Additional Fields
◦ Configured from ES → Configure → Incident Management → Incident Review Settings
◦ Configured in SA-ThreatIntelligence/local/log_review.conf
• Related Investigations
• Correlation Search
• History
• Contributing Events
• Original Event
• Adaptive Responses
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevDBsorted.png
In this case, the Destination field has a Risk Score of 100 associated with the asset, as shown above in orange. This
contributed to the urgency rating of critical for this event.
The diagram below shows how the assigned priority of an identity or asset, combined with the assigned severity of an
event, contributes to the calculated urgency of the event in the Incident Review dashboard.
Source: https://docs.splunk.com/File:ES40_Notable_Urgency_table2.png
For example, a Destination IP Address corresponding to an asset with a risk rating of 100 (critical priority), combined
with an event severity of critical, has resulted in the urgency of “Critical”. If the assigned severity of the event was low
or unknown, the resulting event urgency would have been “High” instead.
Returning to the Incident Review display: Each of the fields has an Action dropdown that allows drilling down into a
variety of dashboards for further contextual details. For example, the Action item next to Destination IP Address
provides a link to the Asset Investigator:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevFieldAct.png
The Asset Investigator displays details for a single host with groups of threat categories, such as All Authentication,
IDS Attacks or Malware Attacks. Each row is presented as a swimlane that provides a heat map for collections of data
points known as candlesticks. Brighter shades indicate a larger number of events within the category for that time period.
Source: https://docs.splunk.com/images/8/83/ES51_UseCase_Malware_AssetInvest.png
Use the time sliders to focus on a specific time range:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_AInvEvent.png
You can also drag your mouse over multiple swimlanes and timeframes to select multiple candlesticks, which
extracts the common fields and a listing of field values into the Event Panel.
Source: https://www.youtube.com/watch?v=6XmiLxKvg6k
In the Event Panel, click the magnifying glass icon for Go to Search to drill down and search on the selected events:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_RawSearch1.png
The New Search dashboard shows the App Context of Enterprise Security, which allows ES-specific field values,
aliases, etc. to be applied to raw log events. The drilldown search uses the Malware_Attacks dataset object within the
Malware datamodel, searching on the desired Destination IP Address of dest as an alias of the dest_ip field in the
Malware data model. From a performance perspective, be aware that ES does NOT use accelerated data models for
drilldown searches, so specifying a smaller time range will provide faster results.
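As a minimal sketch of what such a drilldown search can look like (the exact SPL that ES generates may differ, and the destination IP is a placeholder):
| datamodel Malware Malware_Attacks search
| search Malware_Attacks.dest="<dest_ip>"
Because | datamodel searches unaccelerated events, narrowing the time range here has a direct impact on run time.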
With the desired results available in search, start your investigation with common key fields, such as source and
sourcetype. This will provide a context for what type of events are associated with the observance of malicious activity.
Extend upon this by investigating network-related fields such as src_ip and dest_ip to understand the flow of traffic.
Finally, investigate host-specific values such as uri and client_app to determine what kind of requests were being made,
and whether these reflect normal user behaviour.
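A hedged sketch of this triage flow in SPL, assuming the results have already been narrowed to the destination of interest (index, sourcetype and IP values are placeholders):
index=<your_index> sourcetype=cisco:sourcefire dest_ip="<dest_ip>"
| stats count by source, sourcetype, src_ip, dest_ip
| sort - count
Swap the split-by fields for uri and client_app once the network context is understood.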
Recall that a candlestick only represents a small portion of events within the timerange you selected in the Asset
Investigator. Expand the time range from the Date time range picker, or from the Event Time.
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_LotsOfEvents.png
Optionally, apply tabular formatting by appending | table dest src url, or with the fields you desire.
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SortedEvents.png
In this example, there are three Shockwave Flash (SWF) files and three executables visible from the sourcetype of
cisco:sourcefire. A Shockwave Flash vulnerability likely acted as the point of entry, which then resulted in generation or
download of additional malicious executables. This sourcetype shows network activity, but we should drill down on the
src field to observe other sourcetype activity from a host of interest. Tabulating the output by URL and file name, then
sorting the results, can verify this suspicion.
Following a standard incident response procedure, the malicious host is identified, and the containment phase follows to
quarantine or isolate as appropriate.
Next, drill down into the uri field to find other hosts potentially infected by the same malware, extending the search as
necessary to ensure all relevant hosts are identified. Tabulating this output by | table src url file_name allows
the data to be more readily exported for reporting, as seen below:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SuspiciousTableExport.png
From here, update the notable event created earlier. Select the notable event and click Add Selected to Investigation.
Details of Splunk Investigations are covered later in the course objectives. Place the notable event in Pending until the
investigation is concluded, then mark the event as Closed with appropriate notes to summarise Containment, Eradication,
Response, and Lessons Learned.
NB: These incident response workflows are not an explicit part of Splunk Enterprise Security, but should be documented
to better assist preparation for future incident response.
Review the second use case on your own for identifying initial malware infection using DNS data. Prerequisites include
adding asset & identity data into Splunk ES, normalising anti-malware logs into the Malware CIM data model,
normalising DNS lookup data to the Network Resolution CIM data model, and normalising web activity to the Proxy
object of the Web CIM data model. For the exam, be prepared for questions on CIM, data models & normalisation.
If DNS queries are not collected by a third party sensor, they can be collected by the Splunk Stream app. Details of
mapping source types to Data Models through Field Aliases and the Add-on Builder are discussed in the course
objectives section of this document. The incident response process should start with preparation and identification,
followed by containment, eradication, response, and lessons learned.
https://docs.splunk.com/Documentation/ES/6.6.0/Usecases/Overview
https://docs.splunk.com/Documentation/ES/6.6.0/User/Howurgencyisassigned
Source: https://splunkproducttours.herokuapp.com/tour/splunk-enterprise-security-es
More than 100 dashboards are available, supporting risk analysis, intelligence sources, asset & identity monitoring, as
well as domain dashboards that provide an overview of access, endpoints, network, and asset & identity information.
Audit dashboards monitor the Splunk ES environment.
This section is intentionally short. By learning, practising and reviewing the following sections, you will gain a more
holistic overview of ES features and concepts.
https://docs.splunk.com/Documentation/ES/6.6.0/User/Overview
Splunk Enterprise Security Guided Product Tour
2.0 Monitoring and Investigation (10%)
2.1 Security Posture
The Security Posture dashboard provides an overview appropriate for a SOC wallboard, showing all events and
trends over 24 hours, as well as real-time event information & updates.
Security posture dashboard panels include:
• Key [Security] Indicators: Count of notable events over 24 hours. Indicators are customisable, with
default indicators as follows:
◦ Access Notables
◦ Endpoint Notables
◦ Network Notables
◦ Identity Notables
◦ Audit Notables
◦ Threat Notables
◦ UBA [User Behaviour Analytics] Notables (if Splunk UBA is available)
• Notable Events by Urgency: Based on asset priority and severity assigned to the correlation search.
Supports drilldown into Incident Review for associated events over the last 24 hours.
• Notable Events Over Time: Timeline of notable events by domain that can drill down into Incident
Review for the selected security domain and timeframe.
• Top Notable Events: Displays rule names, count and sparkline of activity over time. Drilldown opens
the Incident Review dashboard scoped to the selected rule.
• Top Notable Event Sources: Displays the top 10 notable events by src, including total count, count
per correlation & domain, and sparkline. Drilldown opens Incident Review scoped to the selected src.
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SecPosDB.png
https://docs.splunk.com/Documentation/ES/6.6.0/User/SecurityPosturedashboard
2.2 Incident Review
Correlation searches are designed and developed to detect suspicious patterns and create notable events.
The Incident Review dashboard then displays notable events in descending date order, with their current status. Unlike
the Security Posture dashboard, which provides an overview of notable events, the Incident Review dashboard provides
individual details of notable events. Notable events can be filtered or sorted by field, and each event may represent one or
more incidents detected by a correlation search.
Analysts use this dashboard to examine, assign, or triage alerts, which may lead to an Investigation.
By default, notable event statuses include the following:
• Unassigned
• New [default]
• In Progress
• Pending
• Resolved
• Closed
Incident Review progresses through stages of:
1. Assignment to an analyst
2. Updating the status of the event from “New” to “In Progress”
3. Performing investigative actions for triage, which might include adaptive response actions
4. Adding appropriate comments as triage continues
5. Optionally, assigning the notable event to an Investigation for more thorough analysis
6. Updating the notable event status to “Resolved”
7. Peer Review to validate the resolution before updating the notable event status to “Closed”
Two of the statuses not mentioned in this example are Unassigned and Pending. The Unassigned status indicates that the
current analyst is no longer working on the event, and that another analyst can pick up where they left off. The Pending
state indicates that the analyst is waiting on a third party such as a vendor, a client, or a change approval.
In cases like the above, it may be necessary to change the configuration of Incident Handling to add additional Notable
Event Statuses. Examples might include “Pending Change”, “In Progress – Team X”, “Resolved – False Positive” or
“Resolved – Mitigated”. This allows the dashboard to provide a clear picture of each incident state, while improving
reporting and use cases. For example, a high number of False Positives for a specific notable event indicates the need to
improve correlation searches for a specific use case.
The Security Domain on the Incident Review page aligns with the key indicators from the Security Posture
dashboard. Note that if User and Entity Behaviour Analytics (UEBA) is not in use, this option will not be
available. In a later section on dashboards, you’ll see how the access, endpoint, network and identity security
domains are presented visually via the Security Domains menu. Threats are more nuanced, as they can pertain
to malware on endpoints, network intrusions or vulnerabilities; or to threat intelligence, which falls under the
Security Intelligence menu. Audit events are observable in separate dashboards under the Audit menu.
Source: https://www.youtube.com/watch?v=6XmiLxKvg6k
https://docs.splunk.com/Documentation/ES/6.6.0/User/IncidentReviewdashboard
Source: https://dev.splunk.com/enterprise/static/SES-460-notable-compressor-38128bbe320a63023373269dfddef322.png
Notable event review statuses can be configured in reviewstatuses.conf within SA-ThreatIntelligence.
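For example, adding a custom end-state such as the "Resolved – False Positive" idea discussed above could look like the stanza below. The stanza number and values are illustrative; confirm the attribute names against reviewstatuses.conf.spec in SA-ThreatIntelligence before relying on them:
[7]
disabled = 0
label = Resolved - False Positive
description = Closed as a false positive; the correlation search may need tuning
default = 0
end = 0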
Risk Event Notables include two fields:
• Risk Events: Events that created the notable alert
• Aggregated Risk Score: Sum of scores associated with each of the contributing events, such that three events
with risk scores of 10, 20 and 40 would have an aggregated risk score of 70.
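The same aggregation can be approximated directly against the Risk data model with tstats; a hedged sketch, where the risk object value is a placeholder:
| tstats summariesonly=true sum(All_Risk.risk_score) as aggregated_risk_score, count as risk_events
  from datamodel=Risk.All_Risk
  where All_Risk.risk_object="<risk_object>"
  by All_Risk.risk_object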
Click the value in the Risk Events field for the notable of interest. This opens a window with two panels.
The top panel displays a timeline of contributing risk events, while the bottom panel includes a table with detailed event
information.
Sort the contributing risk events in the table by Time, Risk Rule or Score.
Expand the notable in the Contributing Risk Events table to analyse the following fields:
• Risk Object
• Source
• Risk Score
• Risk Message
• Saved Search Description
• Threat Object
• Threat Object Type
Click View Raw Event for information on the contributing events that triggered the risk event.
Correlate risk events with dates and risk scores in the timeline visualisation to identify threats. The timeline may also use
colour codes to indicate severity, aligning with colours used in the contributing risk event table.
Up to 100 Contributing Risk Events can be viewed at a time. If more than 100 contributing events exist, the event count
displays as 100+ on the header, with a link to the search page to display all risk events.
Hover over the colour coded icons in the timeline visualisation for more risk event information, including:
• Risk Score
• Event Name
• Description
• Time
• MITRE Tactic
• MITRE Technique
Clicking a notable in the timeline highlights the associated row in the Contributing Risk Events table.
Identify the Risk Object Type as User, System, Network Artifact or Other via the timeline header.
Other components in the Incident Review for a given alert include:
• History: View recent activity for the notable event to see comments and status changes
• Related Investigations
• Correlation search: Understand why the notable event was created or generated
• Contributing events: What source events caused the notable to be created
• Asset and Identity Risk Scores: Drill down on risk analytics
• Adaptive Response: Review automatically completed actions for the event with drill down for more details, and
Adaptive Response Invocations for the associated raw audit events
• Next Steps: Defines what triage actions should be taken next
• Create Short ID: Found under Event Details for sharing with other analysts or to reference this notable event
Source: https://www.domaintools.com/assets/blog_image/how-we-made-investigations-in-splunk-powerful-effective-image-4.jpg
Investigations, Correlation Searches and Adaptive Response will be addressed in detail in a later section.
Sequenced events from sequence templates are also listed in the selected notable alert details, allowing drill down into
each of the events in the sequence that contributed to the notable event being generated.
The focus here is on managing notables rather than investigating notables, but further details on notable investigation can
be found in the first link below:
https://docs.splunk.com/Documentation/ES/6.6.0/User/Triagenotableevents
https://dev.splunk.com/enterprise/docs/devtools/enterprisesecurity/notableeventsplunkes/
Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
Manually added artifacts are automatically selected so that you can click Explore and continue investigating with the new
artifacts. Hovering over the artifacts and selecting the information icon (i) will show the corresponding labels. Labels
can also be seen under the summary tab.
If a workbench panel has drilldown enabled, you can add field values as artifacts from the panel:
1. Select artifacts on the workbench and click Explore
2. In a panel, click a field value and complete the pre-populated Add Artifact dialog box
3. Optionally add a description and labels for the artifact
4. Optionally click Expand Artifacts to look up asset and identity information in asset or identity lookups and add
correlated artifacts to the investigation scope
5. Click Add to Scope to add the desired artifact to the investigation scope
New panels, tabs and profiles can be added to the workbench to simplify investigations.
1. Open an Investigation and click Explore to explore artifacts
2. Click Add Content
3. Click Load profile or Add single tab, make a selection, and save
4. New panels are created via the ES Menu Bar
1. Configure → Content → Content Management
2. For a Prebuilt panel:
1. Create New Content → Panel
2. Type a Prebuilt panel ID, select a Destination App, Type prebuilt panel XML, and Save
3. Alternatively, convert a dashboard panel to a prebuilt panel
3. For a standard (Workbench) panel
1. Create New Content → Workbench Panel
2. Select the panel from the list
3. Optionally add a Label or Description
4. Add a token to replace the token in the panel search
5. Select the artifact Type, Apply, Save, and Save again
In addition to the workbench view, there is the timeline view:
Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
After adding an event to the investigation, individual field values from the raw event can be added as artifacts:
1. View the Timeline of the investigation and locate the event in the Slide View
2. Click Details to view a table of fields and values in the event
3. Click the value to add to the investigation scope and complete the Add Artifact dialog box
4. Optionally add a description and labels for the artifact
5. Optionally click Expand Artifacts to look up asset or identity information and add correlated artifacts to the
investigation scope
6. Click Add to Scope to add the raw event field values to the investigation scope
Finally, there is a Summary view, which provides an overview of notable events and artifacts linked to the
investigation, as well as their respective owners and creators. The list of contributors remains visible in this view, with
the option to add additional contributors as required.
Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
For any of these views, there are also options in the bottom-right corner to:
• View a live feed of relevant notable events
• Perform a Quick Search
• Add an investigation artifact
• View or add Notes, or add a Timeline Note
• View Action History
Notes are for standard work performed on the workbench, such as observations or additional information. In contrast,
timeline notes are for inline comments that help describe the timeline of events, visible at the time you specify.
Security domain dashboards monitor events and status of important security domains:
• Access: Authentication and access-related data, such as login attempts, access control events, and default account
activity
• Endpoint: Malware infections, patch history, system configurations, and time synchronisation
• Network: Traffic data from firewalls, routers, IDPS, vulnerability scanners, proxy servers and hosts
• Identity: Data from asset and identity lists, as well as types of sessions in use
Security Intelligence supports correlation searches and alerts, including contributing risks, events and anomalous or
notable behaviour. Security Domains provides environmental context better suited to investigations, and may
be more closely associated with governance, compliance, audits and security maturity.
As this objective is to explore dashboards, you should interact with each of the dashboards, and think about when each
dashboard might be used in a variety of scenarios. You are not expected to memorise individual panels or their underlying
searches, but should be able to associate individual dashboards with their corresponding security domain.
https://docs.splunk.com/Documentation/ES/6.6.0/User/Domaindashboards
https://docs.splunk.com/Documentation/ES/6.6.0/User/SecurityPosturedashboard
If you use the btool command line tool to verify settings, use it only after you move the
SplunkEnterpriseSecuritySuite app from etc/apps to the etc/shcluster/apps directory. If
SplunkEnterpriseSecuritySuite remains in the etc/apps directory, btool checks may cause errors because add-ons
like SA-Utils that contain .spec files are not installed on the deployer.
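For example, a quick check run on the deployer after moving the app might look like the following (shown purely as an illustration):
splunk btool check --debug
splunk btool savedsearches list --app=SplunkEnterpriseSecuritySuite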
The DA-ESS and SA apps are automatically extracted and deployed throughout the search head cluster.
7. Use the deployer to deploy Enterprise Security to the cluster members. From the deployer, run this command:
splunk apply shcluster-bundle --answer-yes -target <URI>:<management_port> -auth <username>:<password>
Perform the following for standard command line installation of Splunk ES
1. Download Splunk ES and place it on the search head.
2. Start the installation process on the search head. Install with the ./splunk install app <filename> command or
perform a REST call to start the installation from the server command line. E.g.
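A hedged example of the REST form of this call, mirroring the search head cluster example later in this section (credentials and file path are placeholders):
curl -k -u admin:password https://localhost:8089/services/apps/local -d filename="true" -d name="/tmp/splunk-enterprise-security_<version>.spl" -d update="true" -v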
DO NOT use ./splunk install app when upgrading the Splunk Enterprise Security app.
You can upgrade Splunk ES on the CLI using the same process as other Splunk apps or add-ons. After the app is
installed, run the essinstall command with the appropriate flags as shown in the next step.
3. On the search head, use the Splunk software command line to run the following command:
splunk search '| essinstall' -auth admin:password
You can also run this search command from Splunk Web:
| essinstall
When installing from the command line, ssl_enablement defaults to "strict." If you don't have SSL enabled, the
installer will exit with an error. As a workaround or for testing purposes, you can set ssl_enablement to “auto”.
If you run the search command to install Enterprise Security in Splunk Web, you can review the progress of the
installation as search results. If you run the search command from the command line, you can review the
installation log in: $SPLUNK_HOME/var/log/splunk/essinstaller2.log.
Perform the following for command line installation of Splunk ES on a SHC:
1. Download ES as above and place it on the deployer.
2. Install with the ./splunk install app <filename> command or perform a REST call to start the installation from
the server command line. For example:
curl -k -u admin:password https://localhost:8089/services/apps/local -d filename="true" -d name="<file name and directory>" -d update="true" -v
3. On the deployer, use the Splunk software command line to run the following command:
splunk search '| essinstall --deployment_type shc_deployer' -auth admin:password
4. Restart with ./splunk restart only if SSL is changed from disabled to enabled or vice versa.
5. Use the deployer to deploy ES to the cluster members. From the deployer, run this command:
splunk apply shcluster-bundle
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecurity
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecuritySHC
Source: https://docs.splunk.com/File:AOB2.2_overall_procedure1.jpg
Practice using this app with a variety of data to understand the process. Review this design process after following the
instructions below for using the add-on builder to build a new add-on.
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/BeforeYouBegin
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/NameProject
8.2 Use the Add-on Builder to build a new add-on
Follow the above flowchart for building the new add-on, starting with the Create Add-on phase. Fill in the required fields
including name, author, version and description, and click Create. The Add-on Folder Name will automatically be
determined from the specified Add-on name.
Next, Configure Data Collection with a new input. In the first video below, a REST API is used as the source of the data
input, and is actively being queried to pull the data down to Splunk. There is also an option to Create Alert Actions,
which is not discussed in any detail here.
Other modular inputs include shell commands or Python code. Recall that this step is not required for passive data
collection, e.g. where the data is available from a file monitor, for already indexed data, or for a manual file upload.
Active data sources require data input properties and parameters to be provided. Data input properties include the source
type name, input display name, input name, description and collection interval.
Data input parameter types include text, password, checkbox, radio button, dropdown, multiple dropdown or global
account. Drag and drop the relevant fields, specifying labels, help text or default text and values as appropriate.
Once Data Input Properties and Data Input Parameters are configured, proceed to Add-on Setup Parameters. This
may include proxy settings or global account settings.
Next, define the data input and test settings to ensure expected data is received without error. REST inputs will use a
REST URL, URL parameters and request headers, as well as the data input parameters that you specified earlier.
The form values for the parameters are captured using ${field_name} and specified the same way in the REST URL:
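For instance, if you defined data input parameters named start_time and max_results, the REST URL might reference them as follows (the endpoint and parameter names are hypothetical):
https://api.example.com/v1/alerts?since=${start_time}&limit=${max_results}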
Test the configuration settings, troubleshooting as required, and Save when ready. You will be advised when the process
is Done, with the option to add additional data inputs or field extractions.
At this point, the add-on is created on the local system with the name you specified, and the setup page can be validated.
Open the newly created add-on, and click on Add New Input.
Specify a relevant index with the rest of the configuration, and click Add when ready.
Though not listed in the flow diagram above for the polling of active data, Manage Source Types ensures appropriate
event and line breaking, as well as timestamp extraction. This should be a familiar process based on content covered in
Splunk Administration or earlier courses.
Review the data and the current extracted fields. There will likely be fields that aren’t intuitive, or don’t align with the
field names used in CIM data models, so field aliases are required to provide this mapping. Start by returning to the Add-
on Builder to open the newly created add-on, and click on Extract Fields in the menu bar.
Review the source types and the Parsed Format. If this shows as Unparsed Data, click on Assisted Extractions to
update this to the relevant type such as Key Value, JSON, Table or XML data, and click Save. If the data is unstructured,
no further changes are required here.
Click on Map to Data Models in the menu bar. Create a New Data Model Mapping, and you will be prompted to enter
a name of the event type, select one or more source types, and enter a search. Upon selecting the source type, the search
will automatically populate to reference your selection. Click Save.
The next screen will provide event type fields on the left, and data model fields on the right. In the middle section, click
on New Knowledge Object and choose FIELDALIAS. Click on the event type field from the left hand side to populate
the field in the middle. If a data model is selected, the data model field can be selected. Otherwise, simply type the name
of the desired Data Model Field and click OK. When all the required mappings are entered, click on Done to return to
the Data Model Mapping page.
Note that if a data model was not selected, the Data Model Source will display as a dash, but the field aliases are present.
Searching on the index will now display both the original field names and the corresponding field aliases.
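Behind the scenes, each FIELDALIAS knowledge object is written into the add-on's props.conf; a minimal sketch, assuming a hypothetical source type and raw field names:
[<your_sourcetype>]
FIELDALIAS-src = source_address AS src
FIELDALIAS-dest = destination_address AS dest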
Finally, click on Validate & Package and click on Validate. If prompted, click on Update Settings to provide your
credentials to connect to the App Certification service on Splunk.com. Test the credentials and Save when ready.
Once this has been configured, click on Validate to produce an Overall Health Report. If the package looks good and
has no errors, click on Download Package to download the SPL file, which can be renamed to a .zip extension for
manual examination of the add-on configuration files.
The second YouTube video below shows a passive collection approach using test data and an existing CIM model for
Network Traffic. I encourage you to watch both videos and gain hands-on experience in progressing through the stages of
creating an add-on using either passive or active data sources. As a challenge, try following the process for creating a new
source type using custom data of your choosing, and for bonus points, try creating your own datamodel and datasets.
Though there are numerous steps above, the overall process is reasonably straightforward once you’ve got some
hands-on experience. Though this topic has a low weighting, it's possible that one question may reflect the
entire 5%, so following along with the videos and practicing with the free add-on builder will be far easier than
attempting to memorise the above.
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/UseTheApp
https://www.youtube.com/watch?v=-pzyvQMLmf0
https://www.youtube.com/watch?v=cJw3IAgbBV0
9.0 Tuning Correlation Searches (10%)
9.1 Configure correlation search scheduling and sensitivity
Correlation searches underpin the generation of notable events for alerting on potential security incidents. They are
managed from the ES menu under Content Management. From here, locate the correlation search you want to change,
and in the Actions column, you have the option to change between real-time and scheduled searches.
Use a real-time scheduled search to prioritise current data and performance. These are skipped if the search cannot be
run at the scheduled time. Real-time scheduled searches do not backfill gaps in data that occur if the search is skipped.
Use a continuous schedule to prioritise data completion, as these are never skipped.
Optionally modify the cron schedule to control the search frequency. Higher frequency facilitates faster response, but if
related data is expected over an extended period, reduced frequency may be more appropriate. If you are not familiar with
cron schedules, take a look at https://crontab.guru for more information.
Optionally specify a schedule window for the search. A value of 0 means that a schedule window will not be used, while
auto allows the scheduler to automatically set a schedule window. Manual configuration can also be defined in minutes.
If multiple scheduled reports run at the same time, a schedule window allows this search to be deferred in favour of
higher-priority searches. Optionally specify a schedule priority such as High or Highest to ensure it runs at the earliest
available instance for the scheduled time.
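Because correlation searches are saved searches, these scheduling choices correspond to savedsearches.conf attributes. A minimal sketch, with an illustrative stanza name and values:
[My Custom Correlation Search - Rule]
# 0 = continuous schedule (never skipped); 1 = real-time schedule (may be skipped)
realtime_schedule = 0
cron_schedule = */30 * * * *
schedule_window = auto
schedule_priority = higher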
If manually converting a real-time search to a scheduled search, review the time range, which defaults to -5m@m to
+5m@m, and consider updating use of | datamodel from real-time searches to | tstats for efficiency. If you use Guided
Mode to convert the search, it can automatically switch from datamodel to tstats for you. You will either have the option
to edit a Guided Mode search or manually edit the search, but not both. Choosing to Edit search in guided mode will
replace the existing search with a new search.
In regards to sensitivity, correlation searches typically have trigger conditions for adaptive response actions, such as the
generation of notable events. From the ES menu bar, click Configure → Content → Content Management and select
the title of the correlation search you want to edit.
Type a Window duration. Unlike the schedule window above, which gives the scheduler a window in which to run the
search, a Window duration is the period of time during which no further alerts will be generated for matching events. Be careful
not to confuse these two terms. The Fields to group by specifies which fields to use when matching similar events. If the
fields listed here match a generated alert, the correlation search will not create a new alert. Multiple fields can be defined
based on the fields returned by the correlation search.
E.g. A window duration of 30m with grouping fields of src and dest means that events with the same src AND the same
dest will not generate additional alerts during the 30m period, but events with the same src and different dest, or the same
dest and different source WILL generate new alerts for this period. Be careful not to filter out unique actions that should
be investigated. Window duration is appropriate when the additional events represent duplicate alerts or would result in
analysts duplicating investigative effort.
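In savedsearches.conf terms, the window duration and grouping fields map to alert suppression attributes; a minimal sketch matching the 30-minute example above:
alert.suppress = 1
alert.suppress.period = 30m
alert.suppress.fields = src,dest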
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configurecorrelationsearches
https://docs.splunk.com/File:Search_event_grouping_flowchart.png
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configurecorrelationsearches
https://docs.splunk.com/Documentation/Splunk/latest/Search/Abouteventcorrelation
https://docs.splunk.com/Documentation/ES/3.0.1/Install/NotableEventSuppression
10.0 Creating Correlation Searches (10%)
10.1 Create a custom correlation search
Custom correlation search development can be broken down into the following parts:
• Plan the use case for the correlation search
• Create the correlation search
• Schedule the correlation search
• Choose available adaptive response actions for the correlation search
Correlation searches identify data patterns that can indicate a security risk, which might include high-risk users, malware-
infected machines, vulnerability scanning, access control, or evidence of compromised accounts.
Start by defining a use case, being explicit with what you want to detect using the search. What data are you searching,
what are you searching for, what thresholds apply to the search, and how will this data be presented? E.g. search
authentication sources for unsuccessful login attempts, where 10 attempts are made within a rolling 60 minute interval
and present as a timechart.
Once the use case is defined, determine where the relevant data can be found. In this case, the Authentication data model
is a good candidate, but there may be authentication sources that are not CIM compliant or have not yet been mapped to
this data model. Take this opportunity to create the relevant CIM mappings so additional authentication searches can
reference a single datamodel source rather than multiple indexes and sourcetypes.
Next, create the search by navigating from the ES toolbar to Configure → Content → Content Management. Choose
Create New Content → Correlation Search and enter a search name and description. Select an appropriate app, such
as SA-AccessProtection for excessive failed logins. Set the UI Dispatch Context to None. If an app is selected, it will be
used by links in email and other adaptive response actions.
Correlation searches can then be created in Guided mode. From the correlation search, select Mode → Guided and
Continue to open the guided search editor. Select the appropriate data source, such as a Data Model or Lookup File. If
these aren’t feasible options, a manual search may be necessary.
For the example above, set the Data source to Data Model, and select the Authentication Data Model and
Failed_Authentication Dataset. Set Summaries only to Yes to only search accelerated data. Set Time Range to last 60
minutes, Preview the search, then click Next.
You can also filter the data to exclude specific field values, such as where ‘Authentication.dest’ != “127.0.0.1”. In this
example, leave the filter condition blank and click Next.
The remaining two steps are to aggregate and analyse your data. Aggregations typically involve count, but may also
include values. In this example, click Add a new aggregate, select the Function of values, and the Field of
Authentication.tag. Type tag in the Alias field.
Add additional aggregates for dc(Authentication.user) as user_count, dc(Authentication.dest) as dest_count, and the
count Function, with no attributes or alias field defined, for the overall count.
In the next section, split the aggregates by application (Authentication.app) and source (Authentication.src), aliasing as
app and src respectively, then click Next to define the correlation search match criteria.
To recap, we have aggregated tag values, with a count of users, destinations and events, and these aggregated events are
being split by the application and source values. E.g.
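Assembled by hand, it would look roughly like the following; the threshold of 10 comes from the use case defined earlier, and the exact SPL that guided mode generates may differ:
| tstats summariesonly=true values(Authentication.tag) as tag, dc(Authentication.user) as user_count, dc(Authentication.dest) as dest_count, count
  from datamodel=Authentication.Authentication
  where nodename=Authentication.Failed_Authentication
  by Authentication.app, Authentication.src
| rename Authentication.app as app, Authentication.src as src
| where count >= 10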
Source: https://splunkvideo.hubs.vidyard.com/watch/4y6kUbbkCWnXrX2yVQcoCy
The base risk score from systems and users can then be modified using the Risk Factor editor.
Additional included adaptive response actions include:
• Send an email
• Run a script
• Start a stream capture with Splunk Stream
• Ping a host
• Run Nbtstat
• Run Nslookup
• Add threat intelligence
• Create a Splunk Web message
See the "Configureadaptiveresponse" link below for details on how to configure each of these.
When ready, Save the correlation search.
https://docs.splunk.com/Documentation/ES/6.6.0/Tutorials/ResponseActionsCorrelationSearch
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configureadaptiveresponse
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usesummaryindexing
Data can be exported into formats including CSV, JSON, XML, PDF (for reports) and raw event format (for search
results that are raw events, and NOT calculated fields)
CLI export:
splunk search [eventdata] -preview 0 -maxout 0 -output [rawdata|json|csv|xml] > [myfilename.log] ...
NB: rawdata is presented similarly to syslog data. PDF exports are only available from Splunk web exports.
splunk search "index=_internal earliest=09/14/2015:23:59:00 latest=09/16/2015:01:00:00 "
-output rawdata -maxout 200000 > c:/test123.dmp
In this example, up to 200,000 events of _internal index data in the given timerange are output in raw data format to
test123.dmp. Also, note the earliest and latest time formats of mm/dd/yyyy:hh:mm:ss. As this section addresses
data export, focus on the use of the -output parameter, and the available output formats.
REST API Export:
First, POST to the /services/search/jobs/ endpoint on the management interface:
curl -k -u admin:changeme https://localhost:8089/services/search/jobs/ -d search="search sourcetype=access_* earliest=-7d"
Retrieve the <sid> value in the <response> for the search job ID. If you inadvertently close the window before capturing
the ID, it can also be retrieved from Activity → Jobs by opening the Job Manager. Locate the job you just ran and click
Inspect to open the Search Job Inspector, which contains the search job ID.
Next, use a GET request on the /results endpoint for the services namespace (NS) to export the search results to a file.
I.e. /servicesNS/<user>/<app>/search/jobs/<sid>/results/. Ensure you identify the following details:
• Object endpoints (visible from https://localhost:8089/servicesNS/<user>/<app>/)
• Search job user and app (as part of the URI path)
• Output format (atom | csv | json | json_cols | json_rows | raw | xml)
Note the extra REST output options of atom, json_cols and json_rows. An Atom Feed or Atom Syndication
Format is a standard XML response format used for a REST API
E.g. export results to a JSON file using REST API:
curl -u admin:changeme -k https://localhost:8089/servicesNS/admin/search/search/jobs/1423855196.339/results/ --get -d output_mode=json -d count=5
To summarise, a curl -d request POSTs to generate a search, and returns the SID. A second curl request uses
the --get parameter to retrieve the search, specifying the username from the previous search, the app name
(search), the SID for the /search/jobs/ endpoint, followed by the /results/ endpoint.
SDK Export:
Splunk SDKs support data export via Python SDK, Java SDK, JavaScript Export or C# SDK.
See the appendix for an example of a Python SDK export.
https://docs.splunk.com/Documentation/Splunk/8.2.2/Search/Exportsearchresults
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Uploaddata
https://docs.splunk.com/Documentation/Splunk/8.0.2/RESTUM/RESTusing
11.0 Lookups and Identity Management (5%)
11.1 Identify ES-specific lookups
Asset and identity management is derived from a number of defined lookups that contain data from specific sources such
as Active Directory. Custom identity and asset lookups can also be added and prioritised to enrich asset and identity data.
This data is then processed into categorised lookups. Finally, a number of macros and datamodels can be used to query
data elements, or the entire set of asset or identity data.
Assets:
| makeresults | eval src="1.2.3.4" | `get_asset(src)`
| `assets`
|`datamodel("Identity_Management", "All_Assets")` |`drop_dm_object_name("All_Assets")`
Identities:
| makeresults | eval user="VanHelsing" | `get_identity4events(user)`
| `identities`
|`datamodel("Identity_Management", "All_Identities")` |`drop_dm_object_name("All_Identities")`
The macro `drop_dm_object_name` removes the “All_Assets.” or “All_Identities.” prefix respectively from
results, making it much easier to reference the relevant fields. If multiple fields of the same name exist in
different datasets, you may choose not to pipe this macro to the end of the query.
Once individual asset and identity sources are defined and prioritised, they are merged into categorised lookups for asset
strings (zu), assets by CIDR range (zv), identity strings (zy) and default field correlation (zz). Each of these categories
aligns with a KV store collection, or a default fields correlation lookup for asset or identity.
Merged asset and identity data (example row):
• String-based asset correlation: assets_by_str (KV store collection), applied via the lookup definitions
LOOKUP-zu-asset_lookup_by_str-dest, LOOKUP-zu-asset_lookup_by_str-dvc and LOOKUP-zu-asset_lookup_by_str-src
You can also locate lookups under Settings → Lookups. Ensure you are familiar with the process of troubleshooting
lookups, and how lookups relate to the asset and identity management framework. Lookups can also be used for a number
of other purposes as seen in the tables below:
Lookup types, with descriptions and examples:
• Threat intelligence collections: Maintained by several modular inputs. Example: Local Certificate Intel
• Tracker: Search-driven lookups used to supply data to dashboard panels. Example: Malware Tracker
• Per-panel filter lookup: Used to maintain a list of per-panel filters on specific dashboards. Example: HTTP Category Analysis Filter
Individual lookups, with their lookup type and purpose:
• Administrative Identities (List): You can use this lookup to identify privileged or administrative identities on relevant dashboards such as the Access Center and Account Management dashboards.
• Asset/Identity Categories (List): You can use this to set up categories to use to organize an asset or identity. Common categories for assets include compliance and security standards such as PCI or functional categories such as server and web_farm. Common categories for identities include titles and roles.
• Assets (Asset list): You can manually add assets in your environment to this lookup to be included in the asset lookups used for asset correlation.
• Demonstration Assets (Asset list): Provides sample asset data for demonstrations or examples.
• Demonstration Identities (Identity list): Provides sample identity data for demonstrations or examples.
• ES Configuration Health Filter (Per-panel filter lookup): Per-panel filtering for the ES Configuration Health dashboard.
• Expected Views (List): Lists Enterprise Security views for analysts to monitor regularly.
• HTTP Category Analysis Filter (Per-panel filter lookup): Per-panel filtering for the HTTP Category Analysis dashboard.
• HTTP User Agent Analysis (Per-panel filter lookup): Per-panel filtering for the HTTP User Agent Analysis dashboard.
• Identities (Identity list): You can manually edit this lookup to add identities to the identity lookup used for identity correlation.
• IIN and LUHN Lookup (List): Static list of Issuer Identification Numbers (IIN) used to identify likely credit card numbers in event data.
• Interesting Ports (List): Used by correlation searches to identify ports that are relevant to your network security policy.
• Interesting Processes (List): Used by a correlation search to identify processes running on hosts relevant to your security policy.
• Interesting Services (List): Used by a correlation search to identify services running on hosts relevant to your security policy.
• Local * Intel (Threat intel lookup): Used to manually add threat intelligence.
• Modular Action Categories (List): Used to categorize the types of adaptive response actions available to select.
• New Domain Analysis (Per-panel filter lookup): Per-panel filtering for the New Domain Analysis dashboard.
• PCI Domain Lookup (Identity list): Used by the Splunk App for PCI Compliance to enrich the pci_domain field. Contains the PCI domains relevant to the PCI standard.
• Primary Functions (List): Identifies the primary process or service running on a host. Used by a correlation search.
• Prohibited Traffic (List): Identifies process and service traffic prohibited in your environment. Used by a correlation search.
• Security Domains (List): Lists the security domains that you can use to categorize notable events when created and on Incident Review.
• Threat Activity Filter (Per-panel filter lookup): Per-panel filtering for the Threat Activity dashboard.
• Traffic Size Analysis (Per-panel filter lookup): Per-panel filtering for the Traffic Size Analysis dashboard.
• Urgency Levels (List): Contains the combinations of priority and severity that dictate the urgency of notable events.
• URL Length Analysis (Per-panel filter lookup): Per-panel filtering for the URL Length Analysis dashboard.
See the "Manageinternallookups" link below for a sortable version of the list above. You don't need to know the
individual fields in these lookups, but you should understand their general purpose. For example, consider how urgency
levels might be relevant in the context of asset & identity priorities and event severity. There are also 6 separate lookups
involving assets and identities. Understand how these relate to the Assets & Identities framework.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Verifyassetandidentitydata
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Manageinternallookups
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Assetandidentitylookups