N2OS UserManual SDK 22.0.0

Legal notices

Publication Date
February 2022

Copyright
Copyright © 2013-2022, Nozomi Networks. All rights reserved.
Nozomi Networks believes the information it furnishes to be
accurate and reliable. However, Nozomi Networks assumes no
responsibility for the use of this information, nor any infringement of
patents or other rights of third parties which may result from its use.
No license is granted by implication or otherwise under any patent,
copyright, or other intellectual property right of Nozomi Networks
except as specifically described by applicable user licenses. Nozomi
Networks reserves the right to change specifications at any time
without notice.

Third Party Software


Nozomi Networks uses third-party software whose usage is
governed by the applicable license agreements from each of the
software vendors. Additional details about the third-party software
in use can be found at https://security.nozominetworks.com/licenses.

Table of Contents

Legal notices

Chapter 1: Scriptable protocols
    Setup
    Writing a scriptable protocol
    API reference

Chapter 2: OpenAPI
    Setup
    Errors
    Query endpoint
    CLI endpoint
    Import CSV endpoint
    Import JSON endpoint
    Alerts endpoint
    Traces endpoint
    Users endpoint
    Traces endpoint
    Reports endpoint
    Report templates endpoint
    Quarantine endpoint

Chapter 3: Data Model
    Data Model reference

Chapter 4: Data Integrations Best Practices
    Data Sources for Integration
    Nozomi Syslog Data Types
    OpenAPI Data
    Certify Your Integration with Nozomi
Chapter 1: Scriptable protocols

In this manual we will cover the Lua scripting API for building a custom protocol decoder.

Topics:
• Setup
• Writing a scriptable protocol
• API reference

Setup
To add a new scriptable protocol:
1. Copy the Lua script to /data/scriptable_protocols/
2. Configure Guardian with the rule conf.user configure probe scriptable-protocol
<protocol_name> <script_name> in the CLI (<script_name> is the name of the file, including the
extension)
3. Execute service n2osids stop; the ids process will be restarted automatically.
After these steps the new protocol is loaded in Guardian and will analyze the network traffic.
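As a concrete illustration, the three steps above might look like this from a shell with access to the Guardian appliance. File names, the host name, and the user are hypothetical; the paths and commands are the ones listed in the steps.

```
# 1. Copy the Lua script into the scriptable protocols directory
scp my_modbus.lua admin@guardian:/data/scriptable_protocols/

# 2. Register the protocol from the Guardian CLI
conf.user configure probe scriptable-protocol my_modbus my_modbus.lua

# 3. Stop the ids process; it will be restarted automatically
service n2osids stop
```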

Writing a scriptable protocol


The language used to write a scriptable protocol is Lua; please refer to the official Lua documentation
(https://www.lua.org/start.html) to learn more.
This is a minimal protocol implementation:

function can_handle()
return true
end

From the example we can see that the only mandatory step is to define a function called
can_handle which returns true when it recognizes the target protocol.
Of course this implementation is not very useful, since it will try to handle every packet, so let's write
something more complex to detect and analyze some modbus traffic:

function can_handle()
return packet.source_port() == 502 or packet.destination_port() == 502
end

Here we can see a usage of the API to retrieve the packet ports. This makes the check a bit more
accurate, but it is still insufficient to detect a modbus packet in the real world.
Let's start to do some deep packet inspection:

function can_handle()
  if data_size() < 8 then
    return false
  end

  local has_right_port = packet.source_port() == 502 or
                         packet.destination_port() == 502

  fwd(2)
  local has_right_protocol_id = consume_n_uint16() == 0
  local expected_length = consume_n_uint16()

  return has_right_port and
         has_right_protocol_id and
         remaining_size() == expected_length
end

WARNING: don't use global variables. Variables defined outside of the can_handle and
update_status functions are global and their state is shared across every session of the same
protocol.
NOTE: the fwd and consume_* functions move the payload pointer forward.
NOTE: the result of the remaining_size function depends on the position of the payload pointer.
In this example we use the API to inspect the content of the payload. First we check that there are
enough bytes: a modbus packet is at least 8 bytes long. Then we check the port in the same way we did
in the previous example, skip two bytes with the function fwd, and read the next two 16 bit
integers. We check that the protocol id is zero and that the length written in the packet matches the
remaining bytes in our payload. If every check passes we return true, telling Guardian that the next
packets in this session should be analyzed by this protocol decoder.

A protocol with just the can_handle function implemented will only create the node and the session in
the network view; the link is still missing from the graph, and no additional information will be displayed
in the process view.
To extract more information from the modbus packets we are going to implement the update_status
function:

function get_protocol_type()
  return ProtocolType.SCADA
end

function can_handle()
  return is_modbus()
end

function update_status()
  if not is_modbus() then
    return
  end

  local is_request = packet.destination_port() == 502

  local rtu_id = consume_uint8()
  local fc = consume_uint8() & 0x7f

  if is_request then
    is_packet_from_src_to_dst(true)
    set_roles("consumer", "producer")

    if fc == 6 then
      local address = consume_n_uint16()

      local value = DataValue.new()
      value.value = read_n_uint16()
      value.cause = DataCause.WRITE
      value.type = DataType.ANALOG
      value.time = packet.time()

      execute_update_with_variable(FunctionCode.new(fc), RtuId.new(rtu_id),
                                   "r"..tostring(address), value)
      return
    end
  end

  execute_update()
end

NOTE: to avoid duplication we created an is_modbus function from the content of the previous
can_handle function.
NOTE: the is_modbus function has the effect of advancing the payload pointer by 6 bytes, so we can
directly read the rtu_id without further payload pointer manipulations.
NOTE: we defined the get_protocol_type function to declare the protocol type.
In this example of update_status we read more data from the payload and decode the
write single register request. Since we can determine the direction of the communication, we call
is_packet_from_src_to_dst with true to notify Guardian and create a link, and we call
set_roles to set the roles on the involved nodes.
To insert a variable in Guardian there is the execute_update_with_variable function; it takes 4
arguments: the function code, the rtu id, the variable name and the value. The FunctionCode and
RtuId objects can be constructed from a string or a number; the DataValue object can be constructed
with the empty constructor and then filled with the available information.
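For completeness, the is_modbus helper referenced above can be sketched by factoring out the body of the earlier can_handle example. This exact definition is our assumption, not part of the SDK:

```lua
-- Sketch of the is_modbus helper: the same checks as the earlier
-- can_handle example, factored out. As noted above, it leaves the
-- payload pointer 6 bytes into the header (2 skipped + 2 + 2 consumed).
function is_modbus()
  if data_size() < 8 then
    return false
  end

  local has_right_port = packet.source_port() == 502 or
                         packet.destination_port() == 502

  fwd(2)  -- skip the transaction id
  local has_right_protocol_id = consume_n_uint16() == 0
  local expected_length = consume_n_uint16()

  return has_right_port and
         has_right_protocol_id and
         remaining_size() == expected_length
end
```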

With the next example we cover a more complex case and we store some data in the session to handle
a request and a response:

local PENDING_FC = 1
local PENDING_START_ADDR = 2
local PENDING_REG_COUNT = 3

function update_status()
  if not is_modbus() then
    return
  end

  rwd()

  local is_request = packet.destination_port() == 502

  local transaction_id = consume_n_uint16()
  fwd(4)

  local rtu_id = consume_uint8()
  local fc = consume_uint8() & 0x7f

  if is_request then
    is_packet_from_src_to_dst(true)
    set_roles("consumer", "producer")
    session.set_pending_request_number(transaction_id, PENDING_FC, fc)

    if fc == 3 then
      if remaining_size() < 4 then
        return
      end

      local start_addr = consume_n_uint16()
      local registers_count = consume_n_uint16()

      session.set_pending_request_number(transaction_id, PENDING_START_ADDR,
                                         start_addr)
      session.set_pending_request_number(transaction_id, PENDING_REG_COUNT,
                                         registers_count)
    end
  else
    is_packet_from_src_to_dst(false)
    local req_fc = session.read_pending_request_number(transaction_id,
                                                       PENDING_FC)

    if fc == req_fc then
      if fc == 3 then
        local start_addr = session.read_pending_request_number(transaction_id,
                                                               PENDING_START_ADDR)
        local reg_count = session.read_pending_request_number(transaction_id,
                                                              PENDING_REG_COUNT)
        session.close_pending_request(transaction_id)

        if remaining_size() < 1 then
          return
        end

        local byte_count = consume_uint8()

        if remaining_size() ~= byte_count or
           reg_count * 2 ~= remaining_size() then
          send_alert_malformed_packet("Packet is too small")
          return
        end

        for i = 0, reg_count - 1, 1 do
          local value = DataValue.new()
          value.value = consume_n_uint16()
          value.cause = DataCause.READ_SCAN
          value.type = DataType.ANALOG
          value.time = packet.time()

          execute_update_with_variable(FunctionCode.new(fc),
                                       RtuId.new(rtu_id),
                                       "r"..tostring(start_addr+i),
                                       value)
        end

        return
      end
    end
  end

  execute_update()
end

This time we are focusing on the read holding register function code. To understand the communication
and create a variable we need to analyze both the request and the response, keeping some data from
the request and using it in the response. To achieve this we can use the functions provided by the
session object.

API reference

Available Lua libraries


• base
• string
• table
• math
• debug
• utf8

Data types

Class FunctionCode
Constructors • FunctionCode.new(<string>)
• FunctionCode.new(<number>)

Class RtuId
Constructors • RtuId.new(<string>)
• RtuId.new(<number>)

Class DataValue
Constructors • DataValue.new()

Read/write properties • DataValue.value (number)
• DataValue.str_value (string)
• DataValue.cause (DataCause)
• DataValue.time (number, milliseconds since epoch)
• DataValue.type (DataType)
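Putting the pieces together, a DataValue is typically built field by field, as in the chapter's earlier examples. The values below are illustrative:

```lua
-- Illustrative construction of a DataValue (values are made up)
local value = DataValue.new()
value.value = 42.5                  -- numeric reading
value.cause = DataCause.READ_CYCLIC
value.type = DataType.ANALOG
value.time = packet.time()          -- milliseconds since epoch
```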

Class Variable
Methods • set_label(<string>)

Class Node
Methods • set_property(<key>, <value>)
• get_property(<key>)
• delete_property(<key>)
• set_label(<label>)

Enum DataCause
Values • DataCause.READ_SCAN
• DataCause.READ_CYCLIC
• DataCause.READ_EVENT
• DataCause.WRITE

Enum DataType

Values • DataType.ANALOG
• the Analog type represents a floating point number
• DataType.DIGITAL
• the Digital type represents a boolean type and can be either 0 or 1
• DataType.BITSTRING
• the Bitstring type represents a raw value in the form of a sequence of
0 and 1, e.g. "00101110"
• DataType.STRING
• the String type represents a value in the form of a sequence of
printable characters
• DataType.DOUBLEPOINT
• the Double Point type represents a boolean value with an additional
degree of redundancy. It is commonly used in protocols such as
DNP3, IEC 104 or IEC 61850
• DataType.TIMESTAMP
• the Timestamp type represents a point in time in the format of
milliseconds from the epoch
• Note: Only ANALOG, DIGITAL and DOUBLEPOINT types are taken into
consideration by the Process Learning Engine when detecting deviations
from the baseline.

Enum ProtocolType
Values • ProtocolType.SCADA
• ProtocolType.NETWORK
• ProtocolType.IoT

Functions

Syntax data(<index>)
Parameters • index: the position of the byte to read, starting from 0

Description Return the value of the byte at the specified position, or 0 if index is
out of bounds

Syntax data_size()
Description Return the total size of the payload

Syntax remaining_size()
Description Return the size of the payload from the pointer to the end. The result
depends on the usage of functions fwd(), rwd() and consume_*().
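To illustrate the relationship between these functions, assume a payload where data_size() returns 10 and the pointer starts at the beginning (a sketch under that assumption):

```lua
rwd()                         -- pointer at byte 0, remaining_size() == 10
fwd(2)                        -- remaining_size() == 8
local v = consume_n_uint16()  -- reads 2 bytes, remaining_size() == 6
local w = read_uint8()        -- read_* does not move the pointer,
                              -- remaining_size() is still 6
```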

Syntax fwd(<amount>)
Parameters • amount: the number of bytes to skip

Description Move the payload pointer by the specified number of bytes.

Syntax rwd()
Description Move the payload pointer to the beginning of the payload.

Syntax read_uint8()
Description Read an unsigned 8bit integer at the payload pointer position.

Syntax read_int8()
Description Read a signed 8bit integer at the payload pointer position.

Syntax read_n_uint16()
Description Read a network order unsigned 16bit integer at the payload pointer position.

Syntax read_h_uint16()
Description Read a host order unsigned 16bit integer at the payload pointer position.

Syntax read_n_int16()
Description Read a network order signed 16bit integer at the payload pointer position.

Syntax read_h_int16()
Description Read a host order signed 16bit integer at the payload pointer position.

Syntax read_n_uint32()
Description Read a network order unsigned 32bit integer at the payload pointer position.

Syntax read_h_uint32()
Description Read a host order unsigned 32bit integer at the payload pointer position.

Syntax read_n_int32()
Description Read a network order signed 32bit integer at the payload pointer position.

Syntax read_h_int32()
Description Read a host order signed 32bit integer at the payload pointer position.

Syntax read_n_uint64()
Description Read a network order unsigned 64bit integer at the payload pointer position.

Syntax read_h_uint64()
Description Read a host order unsigned 64bit integer at the payload pointer position.

Syntax read_n_int64()
Description Read a network order signed 64bit integer at the payload pointer position.

Syntax read_h_int64()
Description Read a host order signed 64bit integer at the payload pointer position.

Syntax read_n_float()
Description Read a network order float at the payload pointer position.

Syntax read_h_float()
Description Read a host order float at the payload pointer position.

Syntax read_n_double()
Description Read a network order double at the payload pointer position.

Syntax read_h_double()
Description Read a host order double at the payload pointer position.

Syntax read_string()
Description Read a string at the payload pointer position until the null terminator.

Syntax read_string_with_len(str_len)
Description Read a string at the payload pointer position for str_len bytes.

Syntax consume_uint8()
Description Read an unsigned 8bit integer at the payload pointer position and move the
pointer after the data.

Syntax consume_int8()
Description Read a signed 8bit integer at the payload pointer position and move the
pointer after the data.

Syntax consume_n_uint16()
Description Read a network order unsigned 16bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_uint16()
Description Read a host order unsigned 16bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_int16()
Description Read a network order signed 16bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_int16()
Description Read a host order signed 16bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_uint32()
Description Read a network order unsigned 32bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_uint32()
Description Read a host order unsigned 32bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_int32()
Description Read a network order signed 32bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_int32()
Description Read a host order signed 32bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_uint64()
Description Read a network order unsigned 64bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_uint64()
Description Read a host order unsigned 64bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_int64()
Description Read a network order signed 64bit integer at the payload pointer position
and move the pointer after the data.

Syntax consume_h_int64()
Description Read a host order signed 64bit integer at the payload pointer position and
move the pointer after the data.

Syntax consume_n_float()
Description Read a network order float at the payload pointer position and move the
pointer after the data.

Syntax consume_h_float()
Description Read a host order float at the payload pointer position and move the pointer
after the data.

Syntax consume_n_double()
Description Read a network order double at the payload pointer position and move the
pointer after the data.

Syntax consume_h_double()
Description Read a host order double at the payload pointer position and move the
pointer after the data.

Syntax consume_string()
Description Read a string at the payload pointer position until the null terminator and
move the pointer after the data.

Syntax consume_string_with_len(str_len)
Description Read a string at the payload pointer position for str_len bytes and move
the pointer after the data.

Syntax consume_xor_data(bytes_len, key, callback_function)


Description Read bytes_len bytes at the payload pointer position and apply the XOR
function with the byte in key at the same index. callback_function is
then invoked with the payload pointer changed to the transformed payload.
When exiting from the callback function, the previous context is restored and
the pointer is moved after the data.
Note: key must be an array of hex integers with a length greater than or equal
to bytes_len.
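As a sketch of how the callback variant is used (the key and the field layout are invented for illustration):

```lua
-- XOR-decode the next 4 bytes with an illustrative key, then read a
-- 32 bit integer from the transformed payload inside the callback.
consume_xor_data(4, {0x13, 0x37, 0xBE, 0xEF}, function()
  local decoded = consume_n_uint32()
  -- ... use decoded ...
end)
-- Back here the original context is restored and the pointer has
-- advanced 4 bytes past the XOR-ed region.
```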

Syntax consume_gzip_data(bytes_len, callback_function)


Description Read bytes_len bytes at the payload pointer position and decompress it
with gzip. callback_function is then invoked with the payload pointer
changed to the decompressed payload. When exiting from the callback
function, the previous context is restored and the pointer is moved after the
data.

Syntax consume_zlib_data(bytes_len, callback_function)


Description Read bytes_len bytes at the payload pointer position and decompress it
with zlib. callback_function is then invoked with the payload pointer
changed to the decompressed payload. When exiting from the callback
function, the previous context is restored and the pointer is moved after the
data.

Syntax compute_crc16(size, poly, init, xor_out, ref_in, ref_out)
Parameters • size: the amount of bytes on which the CRC is computed
• poly, init, xor_out, ref_in, ref_out: the common CRC input parameters

Description Compute the CRC16 of the remaining payload according to the input
parameters. The input parameters for CRC functions can be easily found
online. For example, to get a CRC16/DNP the parameters are: 0x3D65,
0x0000, 0xFFFF, true, true

Syntax compute_crc32(size, poly, init, xor_out, ref_in, ref_out)
Parameters • size: the amount of bytes on which the CRC is computed
• poly, init, xor_out, ref_in, ref_out: the common CRC input parameters

Description Compute the CRC32 of the remaining payload according to the input
parameters. The input parameters for CRC functions can be easily
found online. For example, to get a plain CRC32 the parameters are:
0x04C11DB7, 0xFFFFFFFF, 0xFFFFFFFF, true, true
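For example, a decoder might verify a trailing CRC16/DNP checksum like this. The 2 byte trailer layout and the byte order of the stored checksum are assumptions for illustration:

```lua
-- Compute CRC16/DNP over the remaining payload minus the 2 CRC bytes,
-- then compare it with the value stored at the end of the frame.
local payload_len = remaining_size() - 2
local computed = compute_crc16(payload_len,
                               0x3D65, 0x0000, 0xFFFF, true, true)
fwd(payload_len)                  -- move to the stored checksum
local stored = consume_n_uint16() -- byte order is an assumption
if computed ~= stored then
  send_alert_malformed_packet("Bad CRC")
end
```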

Syntax set_roles(<client_role>, <server_role>)
Parameters • client_role: the role of the client
• server_role: the role of the server

Description Set the roles of the involved nodes; valid values are: "consumer",
"producer", "historian", "terminal", "web_server", "dns_server", "db_server",
"time_server", "other"

Syntax set_source_type(<node_type>)

Parameters • node_type: the type of the source node

Description Set the type of the source node; valid values are: "switch", "router", "printer",
"group", "OT_device", "broadcast", "computer"

Syntax variables_are_on_client()
Description Notify Guardian that the variables should be added to the client node

Syntax is_packet_from_src_to_dst(<is_from_src>)
Parameters • is_from_src: true if the direction is from src to dst, false otherwise

Description Notify Guardian about the direction of the packet; this function must be
called to obtain a link creation

Syntax execute_update()
Description Notify Guardian about the packet; at least one variant of execute_update
should be called for every packet

Syntax execute_update_with_function_code(<function_code>, <rtu_id>)
Parameters • function_code: an object of type FunctionCode
• rtu_id: an object of type RtuId

Description Notify Guardian about the packet, with a function code and an rtu id

Syntax execute_update_with_variable(<function_code>, <rtu_id>,
<var_name>, <value>)
Parameters • function_code: an object of type FunctionCode
• rtu_id: an object of type RtuId
• var_name: the name of the variable
• value: an object of type DataValue containing the value of the variable
and some information about the data

Description Notify Guardian about the packet, with a function code, an rtu id, a variable
name and a variable value

Syntax execute_update_with_function(<function_code>, <rtu_id>,
<var_name>, <value>, <function>)
Parameters • function_code: an object of type FunctionCode
• rtu_id: an object of type RtuId
• var_name: the name of the variable
• value: an object of type DataValue containing the value of the variable
and some information about the data
• function: the function will be called passing the Variable as an argument

Description Notify Guardian about the packet, with a function code, an rtu id, a variable
name, a variable value and a function that gives the possibility to directly
access the Variable
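A sketch of the <function> variant, using the Variable object's set_label method listed in the data types section. The function code, variable name, and label are illustrative:

```lua
-- The callback receives the Variable being updated, which lets us
-- attach a human readable label to it.
execute_update_with_function(FunctionCode.new(fc), RtuId.new(rtu_id),
                             "r"..tostring(address), value,
                             function(variable)
                               variable:set_label("holding register "..tostring(address))
                             end)
```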

Syntax AlertFactory.new().new_net_device()
Description Raise an alert of type VI:NEW-NET-DEV

Syntax AlertFactory.new().firmware_change()
Description Raise an alert of type SIGN:FIRMWARE-CHANGE

Syntax AlertFactory.new().protocol_error(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:PROTOCOL-ERROR

Syntax AlertFactory.new().wrong_time(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type PROC:WRONG-TIME

Syntax AlertFactory.new().sync_asked_again(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type PROC:SYNC-ASKED-AGAIN

Syntax AlertFactory.new().protocol_flow_anomaly(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type VI:PROC:PROTOCOL-FLOW-ANOMALY

Syntax AlertFactory.new().variable_flow_anomaly(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type VI:PROC:VARIABLE-FLOW-ANOMALY

Syntax AlertFactory.new().dhcp_request(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:DHCP-OPERATION

Syntax AlertFactory.new().invalid_ip(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:INVALID-IP

Syntax AlertFactory.new().new_arp(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type VI:NEW-ARP

Syntax AlertFactory.new().duplicated_ip(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:ARP:DUP

Syntax AlertFactory.new().link_reconnection(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type NET:LINK-RECONNECTION

Syntax AlertFactory.new().rst_from_producer(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type NET:RST-FROM-PRODUCER

Syntax AlertFactory.new().tcp_syn(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type NET:TCP-SYN

Syntax AlertFactory.new().tcp_syn_flood(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:TCP-SYN-FLOOD

Syntax AlertFactory.new().tcp_flood(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:TCP-FLOOD

Syntax AlertFactory.new().protocol_flood(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:PROTOCOL-FLOOD

Syntax AlertFactory.new().mac_flood(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:MAC-FLOOD

Syntax AlertFactory.new().network_scan(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:NETWORK-SCAN

Syntax AlertFactory.new().cleartext_password(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:CLEARTEXT-PASSWORD

Syntax AlertFactory.new().ddos_attack(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:DDOS

Syntax AlertFactory.new().unsupported_func(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:UNSUPPORTED-FUNC

Syntax AlertFactory.new().illegal_parameters(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:ILLEGAL-PARAMETERS

Syntax AlertFactory.new().weak_password(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:PASSWORD:WEAK

Syntax AlertFactory.new().malware_detected(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:MALWARE-DETECTED

Syntax AlertFactory.new().unknown_rtu(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:PROC:UNKNOWN-RTU

Syntax AlertFactory.new().missing_variable(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:PROC:MISSING-VAR

Syntax AlertFactory.new().scada_injection(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:SCADA-INJECTION

Syntax AlertFactory.new().new_variable(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type VI:PROC:NEW-VAR

Syntax AlertFactory.new().new_variable_value(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type VI:PROC:NEW-VALUE

Syntax AlertFactory.new().device_state_change(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:DEV-STATE-CHANGE

Syntax AlertFactory.new().configuration_change(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:CONFIGURATION-CHANGE

Syntax AlertFactory.new().malicious_protocol(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:MALICIOUS-PROTOCOL

Syntax AlertFactory.new().weak_encryption(<reason>)
Parameters • reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:WEAK-ENCRYPTION

Syntax AlertFactory.new().malformed_ot_packet(<triggerId>, <reason>)
Parameters • triggerId: identifier of the triggering engine entity
• reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:SCADA-MALFORMED

Syntax AlertFactory.new().malformed_network_packet(<triggerId>, <reason>)
Parameters • triggerId: identifier of the triggering engine entity
• reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:NETWORK-MALFORMED

Syntax AlertFactory.new().suspicious_time(<triggerId>, <reason>)
Parameters • triggerId: identifier of the triggering engine entity
• reason: a message to be displayed in the alert

Description Raise an alert of type SIGN:SUSP-TIME

Syntax AlertFactory.new().new_node(<nodeId>)
Parameters • nodeId: identifier of the node

Description Raise an alert of type VI:NEW-SCADA-NODE

Syntax AlertFactory.new().new_target_node(<nodeId>)
Parameters • nodeId: identifier of the node

Description Raise an alert of type VI:NEW-NODE:TARGET

Syntax AlertFactory.new().new_node_malicious_ip(<nodeId>, <threatName>)
Parameters • nodeId: identifier of the node
• threatName: the name of the threat

Description Raise an alert of type VI:NEW-NODE:MALICIOUS-IP

syntax AlertFactory.new().new_mac_vendor(<nodeId>,
<macAddress>)
parameters • nodeId: identifier of the node
• macAddress: MAC Address

description raise an alert of type VI:GLOBAL:NEW-MAC-VENDOR

syntax AlertFactory.new().new_mac(<nodeId>, <macAddress>,


<reason>)
parameters • nodeId: identifier of the node
• macAddress: MAC Address
• reason: a message to be displayed in the alert

description raise an alert of type VI:NEW-MAC

syntax AlertFactory.new().malicious_domain(<domain>,
<threatName>, <reason>)
parameters • domain: the malicious domain
• threatName: the name of the threat
• reason: a message to be displayed in the alert

description raise an alert of type SIGN:MALICIOUS-DOMAIN

syntax AlertFactory.new().malicious_url(<url>, <threatName>, <reason>)
parameters • url: the malicious URL
• threatName: the name of the threat
• reason: a message to be displayed in the alert

description raise an alert of type SIGN:MALICIOUS-URL

syntax AlertFactory.new().configuration_mismatch(<nodeId>,
<triggerId>, <reason>)
parameters • nodeId: identifier of the node
• triggerId: identifier of the triggering engine entity
• reason: a message to be displayed in the alert

description raise an alert of type VI:CONF-MISMATCH

syntax AlertFactory.new().multiple_ot_device_reservations(<sNodeId>, <dNodeId>, <protocolId>, <bpfFilter>, <protocolType>, <reason>)
parameters • sNodeId: identifier of the source node
• dNodeId: identifier of the destination node
• protocolId: identifier of the protocol
• bpfFilter: BPF filter
• protocolType: type of the protocol according to the ProtocolType type
• reason: a message to be displayed in the alert

description raise an alert of type SIGN:MULTIPLE-OT_DEVICE-RESERVATIONS

syntax AlertFactory.new().multiple_unsuccessful_logins(<sNodeId>, <dNodeId>, <protocolId>, <bpfFilter>, <protocolType>, <reason>)
parameters • sNodeId: identifier of the source node
• dNodeId: identifier of the destination node
• protocolId: identifier of the protocol
• bpfFilter: BPF filter
• protocolType: type of the protocol according to the ProtocolType type
• reason: a message to be displayed in the alert

description raise an alert of type SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS

syntax AlertFactory.new().generic_event(<triggerId>, <reason>)
parameters • triggerId: identifier of the triggering engine entity
• reason: a message to be displayed in the alert

description raise an alert of type GENERIC:EVENT

syntax AlertFactory.new().multiple_access_denied(<sNodeId>,
<dNodeId>, <protocolId>, <bpfFilter>, <protocolType>,
<reason>)
parameters • sNodeId: identifier of the source node
• dNodeId: identifier of the destination node
• protocolId: identifier of the protocol
• bpfFilter: BPF filter
• protocolType: type of the protocol according to the ProtocolType type
• reason: a message to be displayed in the alert

description raise an alert of type SIGN:MULTIPLE-ACCESS-DENIED

syntax send_alert_malformed_packet(<reason>)
parameters • reason: a message to be displayed in the alert

description raise an alert of type SIGN:NETWORK-MALFORMED or SIGN:SCADA-MALFORMED

syntax notify_captured_url(<clientNodeId>, <serverNodeId>, <url>, <user>, <operation>, <size>, <properties>)
parameters • clientNodeId: node ID of the client
• serverNodeId: node ID of the server
• url: the captured URL to notify
• user: optional string parameter for the user related to the captured URL
• operation: optional string parameter describing the operation
• size: optional integer parameter reporting the size in bytes transferred
when accessing the URL
• properties: optional parameter in JSON format for properties

description notify a captured URL to the system. Note that captured URLs need to
be explicitly enabled by specifying the vi captured_urls enabled
configuration setting.

syntax notify_link_events(<event>, <parameters>)
parameters • event: event to notify
• parameters: JSON dictionary reporting the parameters associated with
the event

description notify a link event to the system. Note that link events need to be explicitly
enabled by specifying the vi link_events enabled configuration
setting.

syntax packet.source_id()
description return the source node id

syntax packet.destination_id()
description return the destination node id

syntax packet.source_ip()
description return the source node ip

syntax packet.destination_ip()
description return the destination node ip

syntax packet.source_mac()
description return the source node mac

syntax packet.destination_mac()
description return the destination node mac

syntax packet.source_port()
description return the source node port

syntax packet.destination_port()
description return the destination node port

syntax packet.is_ip()
description return true if the packet is an ip packet

syntax packet.transport_type()
description return the transport layer type, can be "tcp", "udp", "ethernet", "icmp" or
"unknown"

syntax packet.source_node()
description returns the source node

syntax packet.destination_node()
description returns the destination node

syntax packet.time()
description return the packet time

syntax session.set_pending_request_number(<request_id>, <key>, <value>)
parameters • request_id: a number used to uniquely identify the request
• key: a number used to separate different values in the same request
• value: the number to store

description store a number on the session

syntax session.read_pending_request_number(<request_id>, <key>)
parameters • request_id: a number used to uniquely identify the request
• key: a number used to separate different values in the same request

description read a number from the session

syntax session.set_pending_request_string(<request_id>, <key>, <value>)
parameters • request_id: a number used to uniquely identify the request
• key: a number used to separate different values in the same request
• value: the string to store

description store a string on the session

syntax session.read_pending_request_string(<request_id>, <key>)
parameters • request_id: a number used to uniquely identify the request
• key: a number used to separate different values in the same request

description read a string from the session

syntax session.has_pending_request(<request_id>)
parameters • request_id: a number used to uniquely identify the request

description return true if there are values stored with the request_id

syntax session.has_pending_request_value(<request_id>, <key>)
parameters • request_id: a number used to uniquely identify the request
• key: a number used to separate different values in the same request

description return true if there are values stored with the request_id and key

syntax session.close_pending_request(<request_id>)
parameters • request_id: a number used to uniquely identify the request

description close the pending request and delete the associated data

syntax log_d(<msg>)
parameters • msg: the message to log

description log a debug message

syntax log_e(<msg>)
parameters • msg: the message to log

description log an error message


Chapter 2
OpenAPI

In this chapter we will cover our OpenAPI implementation, which consists of an HTTP endpoint for executing custom queries.

Open API methods that change appliance data produce audit logs; for example, this happens when a new user is added through the Open API or an alert is acknowledged. By default, read-only operations don't produce audit logs. It's possible to change this behavior and have GET Open API methods produce audit logs by specifying the following CLI command: conf.user configure open_api audit get enabled true.

Topics:
• Setup
• Errors
• Query endpoint
• CLI endpoint
• Import CSV endpoint
• Import JSON endpoint
• Alerts endpoint
• Traces endpoint
• Users endpoint
• Traces endpoint
• Reports endpoint
• Report templates endpoint
• Quarantine endpoint

Setup
To perform a call to the endpoint you need to pass authentication credentials as headers; the examples provided use Postman, an HTTP client.
Remember to use your Nozomi Networks Solution's web interface IP instead of the example one.
Nozomi Networks suggests creating dedicated users for OpenAPI usage, with the minimal permissions necessary to access the required data sources.

Figure 1: How to perform an authenticated call



Errors
If you fail to provide valid authentication credentials the expected error will be 401 Unauthorized, as
shown below.

Figure 2: Example of a failed call

If you ask for a data source that does not exist you will receive a proper message in the error field.

Figure 3: Wrong data source



Query endpoint
You can manipulate data sources through the use of queries, which are commands piped one after
another. Please refer to the Queries chapter of the User Manual, or head over to /#/query in your
Nozomi Networks Solution web interface to see some examples.
Requirements and Restrictions
1. A user having the permission to execute api.
2. The result contains the list of items queried.
3. It's possible, and recommended, to use pagination adding page and count params.
4. The page param is the number of the page to return, the count is the dimension of the page.
5. If count is not provided the default value is 100 thousand, if page is not provided default page
number is 1.
6. If the provided count value is higher than 100 thousand, no more than 100 thousand items will
be returned.
For example, to see how many nodes are in the system, call the following URL:
https://10.0.1.10/api/open/query/do?query=nodes | count

Figure 4: Example of a count query

As you can see there's no need to escape the query. Let's see a more complex one:
https://10.0.1.10/api/open/query/do?query=nodes | where_link protocol == http | head 5. In the image we've used Postman's interface to collapse the results so you can clearly see there are five, as requested.
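The calls above can be sketched with Python's standard library. This is an illustrative example, not part of the product: the `BASE_URL`, the `build_query_request` helper, and the `auth_headers` placeholder are assumptions, and the actual authentication headers depend on how your appliance and users are configured.

```python
import urllib.parse
import urllib.request

# Base URL of the appliance web interface (replace with your own).
BASE_URL = "https://10.0.1.10"

def build_query_request(query, page=1, count=100, auth_headers=None):
    """Build an authenticated GET request for the query endpoint.

    The query string is URL-encoded here (Postman and browsers accept
    it unescaped); page/count enable the recommended pagination.
    """
    params = urllib.parse.urlencode({"query": query, "page": page, "count": count})
    url = f"{BASE_URL}/api/open/query/do?{params}"
    return urllib.request.Request(url, headers=auth_headers or {})

req = build_query_request("nodes | where_link protocol == http | head 5", page=1, count=50)
# To actually send it: urllib.request.urlopen(req)  (requires network access)
```

`urllib.parse.urlencode` takes care of the escaping that Postman performs implicitly when you paste the query into the URL bar.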

Figure 5: Filtering HTTP and taking the first 5 results



CLI endpoint
You can apply changes to the system by issuing CLI commands over this endpoint.
The endpoint is located at /api/open/cli and must be invoked via POST with the cmd parameter.

Figure 6: Example of a CLI command

CLI commands allow you to change virtually anything inside the system; please refer to the Configuration section of the User Manual for a more complete reference.
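As a sketch, the same POST can be built with Python's standard library; the `build_cli_request` helper and `BASE_URL` are illustrative assumptions, and the authentication headers again depend on your setup.

```python
import urllib.parse
import urllib.request

BASE_URL = "https://10.0.1.10"  # replace with your appliance IP

def build_cli_request(cmd, auth_headers=None):
    """Build a POST request for the CLI endpoint; the command is sent
    in the form-encoded 'cmd' parameter of the request body."""
    body = urllib.parse.urlencode({"cmd": cmd}).encode("utf-8")
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    headers.update(auth_headers or {})
    return urllib.request.Request(
        f"{BASE_URL}/api/open/cli", data=body, headers=headers, method="POST"
    )

req = build_cli_request("conf.user configure open_api audit get enabled true")
```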

Import CSV endpoint


/api/open/nodes/import allows you to enrich the information associated with nodes by uploading
a CSV file. Each row affects the nodes matching the specified ip field value. When there are no
matches, new nodes are created.
Requirements and Restrictions
1. The authenticated user must be in a group with admin role
2. Only CSV files with a header are accepted
3. There must be an ip column
4. In addition to ip, only the fields listed below and custom fields are considered. Every other provided
field will be ignored. If you need to provide values for custom fields, please make sure that the
names of these custom fields have been already created.

label
firmware_version
vendor
product_name
serial_number
os
mac_address

Example of CSV file


ip,label,firmware_version,vendor,product_name,serial_number
192.168.1.57,node 57,1.2.2,ACME,ACME Product 0,abcdefg
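A CSV body like the one above can be generated programmatically. The following Python sketch (the helper name is an illustrative assumption) writes the required header row plus one row per node using the accepted fields:

```python
import csv
import io

# Columns accepted by the import (besides custom fields);
# 'ip' is mandatory and is used to match existing nodes.
FIELDS = ["ip", "label", "firmware_version", "vendor",
          "product_name", "serial_number", "os", "mac_address"]

def nodes_to_csv(nodes):
    """Serialize node dictionaries into a CSV body with the header
    row required by the endpoint; missing fields stay empty."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for node in nodes:
        writer.writerow(node)
    return buf.getvalue()

csv_body = nodes_to_csv([{"ip": "192.168.1.57", "label": "node 57", "vendor": "ACME"}])
```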

Figure 7: Example of the request



Import JSON endpoint


/api/open/nodes/import_from_json allows you to enrich the information associated with nodes.
The provided information affects the nodes matching the specified ip field value. When there are no
matches, new nodes are created.
Requirements and Restrictions
1. The authenticated user must be in a group with admin role
2. The input must be a JSON dictionary containing a nodes key whose value is an array of node
information
3. Nodes must have a value for the ip field
4. In addition to ip, only the fields listed below and custom fields are considered. Every other provided
field will be ignored. If you need to provide values for custom fields, please make sure that the
names of these custom fields have been already created.

label
firmware_version
vendor
product_name
serial_number
os
mac_address
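A request body for this endpoint can be assembled as follows; this Python sketch (the helper name is an assumption) enforces the mandatory ip field before serializing:

```python
import json

def build_import_payload(nodes):
    """Build the request body: a JSON dictionary with a 'nodes' key
    whose value is an array of node dictionaries. Raises if a node
    is missing the mandatory 'ip' field."""
    for node in nodes:
        if "ip" not in node:
            raise ValueError("every node needs an 'ip' field")
    return json.dumps({"nodes": nodes})

payload = build_import_payload([{"ip": "192.168.1.57", "label": "node 57"}])
```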

Figure 8: Example of the request



Alerts endpoint
A POST to /api/open/alerts/close allows you to close a group of alerts, passed as a
JSON list of ids in the body of the request. You must also pass the close_action field,
containing delete_rules or learn_rules depending on whether you want to close the alerts as security or as change.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The input data must be a JSON dictionary containing an ids key, whose value must be an array of
alert ids, and a close_action field.
3. In case the request body does not adhere to the format described above the call returns a 422
error.
4. In case the request is well formed, the result will contain the id of the job in charge of the task. You
can monitor the status of the job via the alerts/close/status/:id API.

{
"ids": ["uuid"],
"close_action": "learn_rules"
}

Figure 9: Example of alerts close request

A GET to /api/open/alerts/close/status/:id allows you to get the status of a job in
charge of a close alerts task.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. As last parameter of the path you need to specify the id of the job returned by the alerts/close
API.
3. The result will contain the status of the job, which can have one of the following values: SUCCESS,
PENDING or FAIL
4. In case of FAIL status, the error field will report the error reason.
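Since the same SUCCESS/PENDING/FAIL contract is used by the other job-status endpoints in this chapter, a generic polling helper can be sketched like this; the code is illustrative, with the actual HTTP call abstracted behind a `fetch_status` callable you would implement yourself:

```python
import time

def wait_for_job(fetch_status, poll_interval=1.0, max_polls=30):
    """Poll a job-status API until it leaves the PENDING state.

    fetch_status is a callable returning a dict shaped like the API
    result, e.g. {"status": "SUCCESS"} or {"status": "FAIL", "error": "..."}.
    """
    for _ in range(max_polls):
        result = fetch_status()
        if result.get("status") != "PENDING":
            return result
        time.sleep(poll_interval)
    raise TimeoutError("job still PENDING after polling limit")

# Stubbed fetcher standing in for a GET to alerts/close/status/:id
responses = iter([{"status": "PENDING"}, {"status": "SUCCESS"}])
outcome = wait_for_job(lambda: next(responses), poll_interval=0)
```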

Figure 10: Example of alerts/close/status/:id request

A POST to /api/open/alerts/ack allows you to ack/un-ack a group of alerts, passed as a
JSON list of id/ack pairs in the body of the request.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The input data must be a JSON dictionary containing a data key whose value should be an array of
pairs with an alert 'id' and an 'ack' field. Ack can be true or false.
3. In case the request body does not adhere to the format described above the call returns a 422
error.
4. In case the request is well formed, the result will contain the id of the job in charge of the task. You
can monitor the status of the job via the alerts/ack/status/:id API.

{
"data": [
{
"id": "uuid",
"ack": true
}
]
}

Figure 11: Example of alerts/ack request

A GET to /api/open/alerts/ack/status/:id allows you to get the status of a job in
charge of an ack/un-ack alerts task.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.

2. As last parameter of the path you need to specify the id of the job returned by the alerts/ack
API.
3. The result will contain the status of the job, which can have one of the following values: SUCCESS,
PENDING or FAIL
4. In case of FAIL status, the error field will report the error reason.

Figure 12: Example of alerts/ack/status/:id request

A GET to /api/open/alerts/all allows you to get the IDs of alerts matching a condition.
You can specify a filter query in the query parameter and an additional parameter named has_trace
to get the status of the corresponding trace.
Requirements and Restrictions
1. The authenticated user has to belong to a group having admin role or with Alerts section enabled.
2. The query parameter should be in Nozomi Networks Query Language format, where the table
name is implicit, i.e. alerts.
3. The has_trace parameter type is boolean.
4. If no alert matches the specified conditions, a 404 error will be returned.
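Building the request URL can be sketched as follows; the filter clause shown is a hypothetical example of a Nozomi Networks Query Language condition with the implicit alerts table, and the helper name is an assumption:

```python
import urllib.parse

BASE_URL = "https://10.0.1.10"  # replace with your appliance IP

def build_alerts_all_url(query, has_trace=None):
    """Build the alerts/all URL; the table name is implicit, so the
    query parameter carries only the filter condition."""
    params = {"query": query}
    if has_trace is not None:
        params["has_trace"] = "true" if has_trace else "false"
    return f"{BASE_URL}/api/open/alerts/all?" + urllib.parse.urlencode(params)

url = build_alerts_all_url("where risk > 5", has_trace=True)
```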

Figure 13: Example of alerts/all request

A GET to /api/open/alerts/:id/trace allows you to get a file containing the trace of the
alert, whose id is specified as a parameter.

Requirements and Restrictions


1. The authenticated user has to belong to a group having admin role or with Alerts section enabled.
2. The alert id should be passed in the path.
3. If the alert does not exist, a 422 error is returned.
4. In case there is no trace for the specified alert, a 404 error will be returned.

Figure 14: Example of alerts/:id/trace request

A POST to /api/open/alerts/import allows you to import alerts by passing attributes in
JSON format in the body of the request.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role or with Alerts section enabled.
2. The alerts information should be provided in a JSON array named alerts.
3. In case the request body does not adhere to the format described above the call returns a 422
error.
4. In case the request is well formed, the result can contain the validation outcome for errors regarding
mandatory fields and warnings for fields that are potentially missing.
5. If one or more alerts are passing the validation, the result will also contain the id of the job in charge
of importing the alerts. You can monitor the status of the job via the alerts/import-status API.

Figure 15: Example of alerts/import request

A GET to /api/open/alerts/import-status allows you to get the status of a job in
charge of importing alerts.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. You need to specify the id of the job returned by the alerts/import API in the id parameter.
3. The result will contain the status of the job, which can have one of the following values: SUCCESS,
PENDING or FAIL
4. In case of FAIL status, the error field will report the error reason.

Figure 16: Example of alerts/import-status request



Traces endpoint
A GET to /api/open/traces/all allows you to get the traces matching a condition. You
can specify a filter query in the query parameter. You have to specify the operation parameter
defining the requested operation. So far the only allowed value for operation is download.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. As a result you will get a file containing the trace or the traces filtered according to the specified
condition
3. If the trace is still in progress or it is not found, a 422 error with a proper reason string will be
returned.

Figure 17: Example of a traces/all request

A GET to /api/open/traces/bpf-filter allows you to select traces using a BPF filter.
This call returns a job id, while the actual disk search is performed asynchronously. The search will
return a list of the first PCAP traces that match the filter. The maximum number of PCAP traces is 50
by default and can be configured with the open_api bpf_filter traces_limit setting. There
can’t be more than a limited number of concurrent BPF trace searches at a time. This number is 2
by default and can be configured with the open_api bpf_filter max_concurrent_searches
setting.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.

Figure 18: Example of a BPF filter request

A GET to /api/open/traces/bpf-filter-status allows you to get the status of a job in
charge of looking for traces given a BPF filter.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. You need to specify the id of the job returned by the traces/bpf-filter API in the id parameter.
3. The result will contain the status of the job, which can have one of the following values: SUCCESS,
PENDING or FAIL
4. In case of FAIL status, the error field will report the error reason.

Figure 19: Example of traces/bpf-filter-status request



Users endpoint
A GET to /api/open/users allows you to get a list of all the users.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The result contains the list of all users.
3. It's possible to use pagination adding page and count params
4. The page param is the number of the page to return, the count is the dimension of the page.
5. If count is nil or 0 the default value will be 100, if page is nil or 0 the request will not be paginated.
6. This api is disabled by default; to enable it add conf.user configure api users enabled
true in CLI.

Figure 20: Example of users all request

A GET to /api/open/user_groups allows you to get a list of all the user groups.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The result contains the list of all user groups.
3. It's possible to use pagination adding page and count params
4. The page param is the number of the page to return, the count is the dimension of the page.
5. If count is nil or 0 the default value will be 100, if page is nil or 0 the request will not be paginated.

Figure 21: Example of user groups all request

A GET to /api/open/users/:id allows you to get the user having the id passed as path parameter.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. As last parameter of the path you need to specify the id of the user.
3. The result will contain the user
4. In case the user with that id is not found you'll get a 404.

Figure 22: Example of users/:id request

A DELETE to /api/open/users/:id allows you to delete the user having the id passed as path
parameter.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. As last parameter of the path you need to specify the id of the user.
3. The result will contain the status code 204 for success else the error code
4. In case the user with that id is not found you'll get a 404.

Figure 23: Example of delete users/:id request

A POST to /api/open/users allows you to create a new user.


Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The input must be a JSON dictionary containing the user fields properly populated
3. username is mandatory and unique.
4. password is mandatory and has to respect the password strength rules.
5. user_group_ids is mandatory, must contain at least an id of an existing user-group.
6. strategy can contain the value "local" or "saml".
7. is_suspended is a boolean.
8. should_update_pwd true if the user must update the password when logging in.
9. ssh_keys is the user's SSH key, used if the user wants to connect via SSH to the instance.
10.allow_root_ssh true to allow the user having the ssh_key above to connect via SSH to the
instance.
11.In case the request is well formed, a 201 response is returned with the id of the created user
inside the result.

{
"username": "user_under_test22",
"password": "aValidP4ss!",
"user_group_ids": [2],
"strategy": "local",
"is_suspended": false,
"should_update_pwd": false,
"ssh_keys": "an_ssh_key",
"allow_root_ssh": true
}

Figure 24: Example of users create request

A PUT to /api/open/users/:id allows you to update the user with the id passed as path param.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. As last parameter of the path you need to specify the id of the user you want to update.
3. The input must be a JSON dictionary containing the user fields properly populated
4. If the update succeeds, the call returns a 204 (No Content) response
5. You can't update the password here: updating the password is not idempotent, so it can't be
done via PUT.
6. The fields you can update are listed below.
7. user_group_ids must contain at least one valid id.

{
"username": "user_under_test22",
"strategy": "local",
"user_group_ids": [1,2],
"is_suspended": false,
"should_update_pwd": false,
"ssh_keys": "a_new_key",
"allow_root_ssh": true
}

Figure 25: Example of update users/:id request



A PATCH to /api/open/users/:id/password allows you to change the password of the user
having the id passed as path param.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The user id should be passed in the path.
3. You need to pass the new password in the body.
4. New password must respect the password strength rules.
5. In case the password is valid, an empty response with status code 204 will be returned.

{
"password": "4ValidP4ssw0rd!"
}

Figure 26: Example of users/:id/password request



Traces endpoint
A GET request to /api/open/pcaps allows you to get the list of all traces available on the
machine. You can pass the ID of a given trace to get information only for that trace.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. In case the request body does not adhere to the format described above, the call returns a 422
error.
3. If you specify an ID of a trace that does not exist, the call returns a 404 error.
4. If the request is accepted, the result will contain useful information on the retrieved trace.

Figure 27: Example of traces get all list request

Figure 28: Example of traces get by ID request

A DELETE request to /api/open/pcaps/:id allows you to delete a given trace.


Requirements and Restrictions

1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. In case the request body does not adhere to the format described above, the call returns a 422
error.
3. If you specify an ID of a trace that does not exist, the call returns a 404 error.
4. If the request is accepted, the trace will be deleted.

Figure 29: Example of trace delete request

A POST request to /api/open/pcaps/upload allows you to upload a trace passed as a file
in the body of the request.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. The trace should be passed in the form-data section of the request body.
3. In case the request body does not adhere to the format described above, the call returns a 422
error.
4. If the file sent in the request is not a valid trace, the call returns a 422 error along with an error
reason describing the cause of the validation failure.
5. If the request is accepted, the trace will be uploaded.

Figure 30: Example of trace upload request

A POST request to /api/open/pcaps/import allows you to import a trace file that is
already present on the machine.

Requirements and Restrictions


1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. The trace file should be present in the /data/tmp directory of the machine.
3. The filename parameter of the request should contain the name of the trace file.
4. In case the request body does not adhere to the format described above, the call returns a 422
error.
5. If the trace file is not a valid trace, the call returns a 422 error along with an error reason describing
the cause of the validation failure.
6. If the request is accepted, the trace will be uploaded.

Figure 31: Example of trace import request

A PATCH request to /api/open/pcaps allows you to replay a trace that has been previously
uploaded or imported.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. The trace should be present in the list of the available traces returned by the GET request to /api/
open/pcaps.
3. The id parameter of the request should contain the ID of the trace.
4. The use_packet_time boolean parameter should be set to true if you want to use the time of
the packets; false otherwise.
5. The data_to_reset_before_play parameter should be set to {} if you do not want to reset
data before playing the trace. Otherwise, you need to specify a JSON dictionary with the sections
you want to reset, for example {"alerts": true, "vi": true}. The list of all available
sections is the following:
• alerts_data
• assertions
• learning
• network_data
• process_data
• queries
• smart_polling_data
• timemachine_data
• traces_data

• vi_data
• vulnerability_data
The list above reflects the options available for Data reset in the UI.
6. In case the request body does not adhere to the format described above, the call returns a 422
error.
7. If you specify an ID of a trace that does not exist, the call returns a 404 error.
8. If the request is accepted, the trace will be replayed.
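The PATCH body described above can be assembled like this; the helper name and the trace id are illustrative assumptions:

```python
import json

def build_replay_body(trace_id, use_packet_time=True, reset_sections=None):
    """Build the PATCH body for replaying a trace.

    reset_sections is a list of section names (e.g. ["alerts_data",
    "vi_data"]); an empty dict means no data reset before playing.
    """
    data_to_reset = {name: True for name in (reset_sections or [])}
    return json.dumps({
        "id": trace_id,
        "use_packet_time": use_packet_time,
        "data_to_reset_before_play": data_to_reset,
    })

body = build_replay_body("42", reset_sections=["alerts_data", "network_data"])
```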

Figure 32: Example of trace replay request

A PATCH request to /api/open/pcaps/note allows you to change the note field of a trace.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role or with Upload traces section
enabled.
2. The trace should be present in the list of the available traces returned by the GET request to /api/
open/pcaps.
3. The id parameter of the request should contain the ID of the trace.
4. The note parameter of the request should contain the text you want to set.
5. In case the request body does not adhere to the format described above, the call returns a 422
error.
6. If the request is accepted, the note will be changed.

Figure 33: Example of trace note request



Reports endpoint
A GET to /api/open/reports allows you to get a list of all the reports generated.
Requirements and Restrictions
1. A user having the permission to execute api.
2. The result contains the list of all the reports.
3. It's possible to use pagination adding page and count params
4. The page param is the number of the page to return, the count is the dimension of the page.
5. If count is nil or 0 the default value will be 100, if page is nil or 0 the request will not be paginated.
6. You can filter the result by passing a template_name query param whose value is the report
template name you want to filter on.

Figure 34: Example of reports all request

A GET to /api/open/reports/:id allows you to get the report having the id passed as path
parameter.
Requirements and Restrictions
1. A user having the permission to execute api.
2. As last parameter of the path you need to specify the id of the report.
3. The result will contain the report.
4. In case the report with that id is not found you'll get a 404.

Figure 35: Example of reports/:id request

A GET to /api/open/reports/:id/files allows you to download the report having the id passed
as path parameter.
Requirements and Restrictions
1. A user having the permission to execute api.
2. As middle parameter of the path you need to specify the id of the report.
3. The report download will be triggered.
4. In case the report with that id is not found you'll get a 404.

Figure 36: Example of reports/:id request

A POST to /api/open/reports allows you to create a new report.


Requirements and Restrictions
1. A user having the permission to execute api.
2. You need to pass as query param the report_template_id of the template you want to create a report from.

3. In case the request is well formed, a 202 response is returned with the id of the job taking care
of the request.

Figure 37: Example of create report request

A GET to /api/open/reports/jobs/:id/status allows you to get the create report job result.
Requirements and Restrictions
1. A user having the permission to execute api.
2. As parameter of the path you need to specify the id of the job.
3. The result will contain the job status

Figure 38: Example of reports/:id request



Report templates endpoint


A GET to /api/open/report_templates allows you to get a list of all the report templates.
Requirements and Restrictions
1. A user having the permission to execute api.
2. The result contains the list of all the report templates.
3. It's possible to use pagination adding page and count params
4. The page param is the number of the page to return, the count is the dimension of the page.
5. If count is nil or 0 the default value will be 100, if page is nil or 0 the request will not be paginated.

Figure 39: Example of reports all request

A GET to /api/open/report_templates/:id allows you to get the report template having the id
passed as path parameter.
Requirements and Restrictions
1. A user having the permission to execute api.
2. As last parameter of the path you need to specify the id of the report template.
3. The result will contain the report template.
4. In case the report template with that id is not found you'll get a 404.

Figure 40: Example of reports/:id request


Quarantine endpoint
A GET request to /api/open/quarantine allows you to get a file from the quarantine directory.
Requirements and Restrictions
1. The authenticated user must be in a group having admin role.
2. The full path of the file must be specified in the file parameter, in the format /data/quarantine/<NAME>.
3. If you specify a path that does not exist, the call returns a 404 error.
4. If the request is accepted, the result will contain the actual file that Guardian extracted from traffic and that the Sandbox classified as malicious.

Figure 41: Example of request

Hint: as shown in the top part of the previous screenshot, the file parameter to be used with the
request can be found in the properties field of SIGN:MALWARE-DETECTED alerts.
Chapter 3: Data Model
In this chapter we will cover our Data Model reference for query entities.

Topics:
• Data Model reference
| Data Model | 60

Data Model reference

alerts
Alerts represent events raised by the Guardian

id Primary key of this query source


type_id The Type ID represents a unique "class" of the
Alert, that characterizes what the Alert is about
in a unique way
name Name of the type ID. It can be updated
dynamically by the correlation engine.
description More details about the alert
severity Syslog-like severity
mac_src Source MAC address
mac_dst Destination MAC address
ip_src Source IP address
ip_dst Destination IP address
risk Risk, between 0 and 10
protocol The protocol in which this entity has been
observed
src_roles Roles of the source node
dst_roles Roles of the target node
time Timestamp in epoch milliseconds when this
entity was created or updated
ack True if the Alert has been acknowledged
id_src ID of the source node
id_dst ID of the destination node
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
zone_src Source zone
zone_dst Destination zone
appliance_id The id of the appliance where this entity has
been observed
port_src Source port
port_dst Destination port
label_src Label of the source node
label_dst Label of the destination node
trigger_id ID of the triggering engine entity
trigger_type Name of the trigger/engine
appliance_host The hostname of the appliance where this
entity has been observed
appliance_ip The IP of the appliance where this entity has
been observed

transport_protocol Name of the transport protocol (e.g. tcp/udp/icmp...)
is_security True if the alert is a Cybersecurity alert. False
otherwise (e.g. a network monitoring one)
note User-defined note about the Alert
appliance_site Site name of the appliance where this alert has
been generated
parents ID of parent incidents.
is_incident True if this Alert is an incident grouping more
alerts
properties JSON with additional information for this alert
created_time Time in epoch milliseconds when the alert has
been created
incident_keys (Internal use)
bpf_filter BPF filter for the entity, used when performing
traces for this entity
closed_time Time in epoch milliseconds when the alert has
been closed. 0 if still open.
status Status of the alert
session_id ID of the Session during which this alert was
raised
replicated This is true if the record has been replicated on
the replica machine
capture_device Name of the interface from which this entity
has been detected
threat_name In case of known threat, this holds the threat
name
type_name Name of the type ID. It is immutable.
sec_profile_visible True if the alert is visible according to the
Security Profile

appliances
This query source contains any kind of appliance on the CMC, and the Remote Collectors -- if any -- on the Guardian

ip Last IP address of the appliance


last_sync Timestamp in epoch milliseconds when the last
full sync occurred
id Primary key of this query source
info JSON with miscellaneous information about
the appliance
allowed True if the appliance is in allowed state, meaning that all its data will be pushed to its upstream appliance
sync_throughput Amount of throughput used for synchronization
purposes

is_updating True if the appliance is currently applying a software update
map_position (Internal use)
previous_alerts_count_last_5m (Internal use)
version_locked True if the appliance has been version locked
site Site name this appliance belongs to
host Host name of the appliance
time Timestamp in epoch milliseconds when this
entity was created or updated
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
replicated This is true if the record has been replicated on
the replica machine
deleted_at Time the entity was cancelled
health (Internal use)
appliance_id The id of the appliance where this entity has
been observed
appliance_ip The IP of the appliance where this entity has
been observed
appliance_host The hostname of the appliance where this
entity has been observed
force_update True if a force update has been issued to this
appliance
model Model of the appliance
last_seen_packet Point in time in epoch milliseconds when a
packet has been captured by the appliance
has_same_version_of_cmc (Internal use)
is_cmc True if the appliance is a CMC
is_guardian True if the appliance is a Guardian
is_remote_collector True if the appliance is a Remote Collector
has_smart_polling True if the Smart Polling is available

assertions
An assertion represents an automatic check run against other query sources

query The query that is run as basis of the assertion


result True if the assertion is satisfied, false if it is
failing
name Name of the assertion
failed_since Time since which the assertion has been failing, in epoch milliseconds
id Primary key of this query source
can_send_alert True if the assertion will raise alerts
has_sent_alert True if the assertion has sent alerts in the past
bpf_filter BPF filter used to capture traffic on failure

failures_count Number of failures


time Timestamp in epoch milliseconds when this
entity was created or updated
alert_delay Delay in seconds before an alert is raised. Can
be used as soft limit to handle flipping-states
situations.
can_request_trace True if a trace will be requested on failure
alert_risk Risk of raised alerts
is_security True if the assertion is a Cybersecurity
assertion. False otherwise (e.g. a network
monitoring one)
group_id (Internal use)
note Note about the assertion
deleted_at Time the entity was cancelled
replicated This is true if the record has been replicated on
the replica machine
synchronized True if this entity has been synchronized with the upper CMC or Vantage
propagate_to_appliances (Internal use)
propagated (Internal use)

assets
Assets represent a local, physical system to care about, and can be composed of one or more Nodes

name Name of the node (Note: This field is automatically assigned by Guardian based on the most reliable available information, such as: address, network qualified names, nodes' assigned labels, etc.)
level The Purdue model level of the asset
appliance_hosts The hostname(s) of the appliance(s) where this
entity has been observed
capture_device Name of the interface from which this entity
has been detected
ip IP address(es) of the asset. It can be either
IPv4, IPv6 or empty (in case of L2 node)
mac_address MAC address(es) of the asset. It can be
missing in some situations (serial nodes)
mac_address_level (for internal use)
vlan_id The VLAN ID(s) of the asset. It can be absent
if the traffic to/from the node is not VLAN-
tagged
mac_vendor MAC address vendor(s). Is not empty when the
MAC address is present and the corresponding
Vendor name is known
os Operating System of the asset, if available.
This field is not present when the
firmware_version is available

roles The set of application-level roles of the asset. Differently from the type, these are behaviors
vendor Vendor of the asset
vendor:info This is a metadata field about the vendor field
firmware_version The firmware version of the asset. The field is
not present when the os field is available
firmware_version:info This is a metadata field about the
firmware_version field
os_or_firmware Since os and firmware cannot be present at the same time, this field allows you to get either of the two in a coalesce-like manner
serial_number The serial number of the asset
serial_number:info This is a metadata field about the
serial_number field
product_name The product name of the asset
product_name:info This is a metadata field about the
product_name field
type The type of the asset
type:info This is a metadata field about the type field
protocols The unique protocols used from and to this
asset
nodes The set of node id(s) that compose this asset
custom_fields Any additional custom field defined in the
custom Data Model
device_id (Internal use)
is_ai_enriched This field is true if this asset has been enriched
by Asset Intelligence

captured_logs
Logs captured passively over the network

id Primary key of this query source


time Timestamp in epoch milliseconds when this
entity was created or updated
appliance_id The id of the appliance where this entity has
been observed
appliance_ip The IP of the appliance where this entity has
been observed
appliance_host The hostname of the appliance where this
entity has been observed
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
id_src Source id of the packet where the log was
captured
id_dst Destination id of the packet where the log was
captured

protocol The protocol in which this entity has been observed
log Log contents
replicated This is true if the record has been replicated on
the replica machine
sync_time Timestamp in epoch milliseconds when the
event was synchronized

captured_urls
URLs and other protocol calls found in the network. Access to files, requests to DNS, requested URLs, and others are available in this query source.

id Primary key of this query source


id_src Source id of the packet where the URL was
captured
id_dst Destination id of the packet where the URL
was captured
protocol The protocol in which this entity has been
observed
time Timestamp in epoch milliseconds when this
entity was created or updated
url Captured URL
operation Operation performed to access the URL
username Username that performed the activity
size_bytes Size in bytes transferred when accessing the
URL
session_id ID of the Session during which this URL was
captured
properties JSON with additional information captured with
this event

function_codes
Function Codes used in the environment

id Primary key of this query source


protocol The protocol in which this entity has been
observed
fc The symbolic function code
count How many times this function code has been
used since restart of the system
description The description of the function code

health_log
Health-related events about the system, like high resource utilization or hardware-related issues

id Primary key of this query source



time Timestamp in epoch milliseconds when this entity was created or updated
appliance_id The id of the appliance where this entity has
been observed
appliance_ip The IP of the appliance where this entity has
been observed
appliance_host The hostname of the appliance where this
entity has been observed
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
info JSON with the information captured about the event
replicated This is true if the record has been replicated on
the replica machine

link_events
Events that can occur on a Link, such as it becoming available or unavailable

id_src Source node id


id_dst Destination node id
protocol The protocol in which this entity has been
observed
event Payload of the event
id Primary key of this query source
port_src Source port
port_dst Destination port
time Timestamp in epoch milliseconds when the
event was created
session_id ID of the Session during which this event was captured
transport_protocol Transport protocol used by the traffic
generating this event
params JSON with additional information captured with
this event

links
Links are protocol relations between two Nodes and with a specific protocol. They model the interaction
between Nodes

from Client node of the link


to Server node of the link
is_from_public True if client node is not a local node but an
outside, public IP.
is_to_public True if server node is not a local but an
outside, public IP.
from_zone Zone of the client node of the link
to_zone Zone of the server node of the link

protocol The protocol in which this entity has been observed
first_activity_time Timestamp in epoch milliseconds when a packet was sent on this link for the first time
last_activity_time Timestamp in epoch milliseconds when a packet was sent on this link for the last time
last_handshake_time Timestamp in epoch milliseconds when the last
TCP handshake has occurred on this link
transport_protocols Set of transport protocols observed for this link
tcp_handshaked_connections.total Total amount of TCP handshaked connections
tcp_handshaked_connections.last_5m Amount of TCP handshaked connections in the
last 5 minutes
tcp_handshaked_connections.last_15m Amount of TCP handshaked connections in the
last 15 minutes
tcp_handshaked_connections.last_30m Amount of TCP handshaked connections in the
last 30 minutes
tcp_connection_attempts.total Total amount of TCP SYN packets
tcp_connection_attempts.last_5m Amount of TCP SYN packets in the last 5
minutes
tcp_connection_attempts.last_15m Amount of TCP SYN packets in the last 15
minutes
tcp_connection_attempts.last_30m Amount of TCP SYN packets in the last 30
minutes
transferred.packets Total number of packets transmitted
transferred.bytes Total number of bytes transmitted
transferred.last_5m_bytes Number of bytes transmitted in the last 5
minutes
transferred.last_15m_bytes Number of bytes transmitted in the last 15 minutes
transferred.last_30m_bytes Number of bytes transmitted in the last 30
minutes
transferred.smallest_packet_bytes Smallest packet size in bytes observed
transferred.biggest_packet_bytes Biggest packet size in bytes observed
transferred.avg_packet_bytes Average packet size in bytes observed
tcp_retransmission.percent Percentage of TCP packets that have been
retransmitted
tcp_retransmission.packets Total number of TCP packets that have been
retransmitted
tcp_retransmission.bytes Total amount of bytes for TCP packets that
have been retransmitted
tcp_retransmission.last_5m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 5 minutes
tcp_retransmission.last_15m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 15 minutes
tcp_retransmission.last_30m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 30 minutes

throughput_speed Live throughput for the entity


is_learned This is true for links that were observed during
the learning phase
is_fully_learned This is true for links that were observed also
during the learning phase and which properties
are not changed since then
is_broadcast True if this is not a real node but a broadcast
or multicast entry
has_confirmed_data True if data has been exchanged in both directions, or more generically if the data is really flowing and is not likely a scan or similar
alerts The number of alerts being created around this
link
last_trace_request_time Last time in epoch milliseconds that a trace
has been asked on the link
active_checks List of active real-time checks on the entity
function_codes Set of function codes seen on this link
bpf_filter BPF filter for the entity, used when performing
traces for this entity
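The transferred counters above are related to each other; for instance, transferred.avg_packet_bytes can be recomputed from the raw packet and byte totals. A sketch (the server already exposes the derived field; this only illustrates the relationship, assuming records arrive as dicts):

```python
def avg_packet_bytes(transferred: dict) -> float:
    """Recompute transferred.avg_packet_bytes from the raw counters
    transferred.packets and transferred.bytes, guarding against an
    idle link with zero packets."""
    packets = transferred.get("packets", 0)
    return transferred.get("bytes", 0) / packets if packets else 0.0
```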

node_cpe_changes
On the event of update of a CPE, an entry in this query source is created to keep track of software
updates or better detection of software

id Primary key of this query source


node_id The id of the node this CPE refers to
cpe The old full CPE
cpe_part The old part piece of the CPE
cpe_vendor The old vendor piece of the CPE
cpe_product The old product piece of the CPE
cpe_version The old version piece of the CPE
cpe_update The old update piece of the CPE
new_cpe The CPE that has replaced the old one
new_cpe_vendor The CPE vendor that has replaced the old one
new_cpe_product The CPE product that has replaced the old one
new_cpe_version The CPE version that has replaced the old one
new_cpe_update The CPE update that has replaced the old one
node_cpe_id The ID of the Node CPE (node_cpes query source) entity to which this change event relates
time Timestamp in epoch milliseconds when this
entity was created or updated
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
appliance_id The id of the appliance where this entity has
been observed

appliance_ip The IP of the appliance where this entity has been observed
appliance_host The hostname of the appliance where this
entity has been observed
human_cpe_vendor The old human-readable version of the CPE
vendor
human_cpe_product The old human-readable version of the CPE
product
new_human_cpe_vendor The new human-readable version of the CPE
vendor
new_human_cpe_product The new human-readable version of the CPE
product
human_cpe_version The old human-readable version of the CPE
version
human_cpe_update The old human-readable version of the CPE
update
new_human_cpe_version The new human-readable version of the CPE
version
new_human_cpe_update The new human-readable version of the CPE
update
likelihood A value between 0.1 and 1.0 where 1.0
represents the maximum likelihood of the CPE
to be real. This is the old value.
new_likelihood A value between 0.1 and 1.0 where 1.0
represents the maximum likelihood of the CPE
to be real. This is the new value.
replicated This is true if the record has been replicated on
the replica machine
cpe_edition The old edition piece of the CPE
new_cpe_edition The new edition piece of the CPE
human_cpe_edition The old human-readable version of the CPE
edition
new_human_cpe_edition The new human-readable version of the CPE
edition

node_cpes
Lists CPEs (Common Platform Enumeration), that is, software or components connected to a specific Node in the system

id Primary key of this query source


node_id The id of the node this CPE refers to
cpe The full CPE
cpe_part The part piece of the CPE
cpe_vendor The vendor piece of the CPE
cpe_product The product piece of the CPE
cpe_version The version piece of the CPE
cpe_update The update piece of the CPE

time Timestamp in epoch milliseconds when this entity was created or updated
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
appliance_id The id of the appliance where this entity has
been observed
appliance_ip The IP of the appliance where this entity has
been observed
appliance_host The hostname of the appliance where this
entity has been observed
updated This is true if the record has been processed.
When false, the value of the record must not
be used.
cpe_translator Name of the CPE translator that produced this
CPE. For diagnostic purposes only.
human_cpe_vendor The human-readable version of the CPE
vendor
human_cpe_product The human-readable version of the CPE
product
human_cpe_version The human-readable version of the CPE
version
human_cpe_update The human-readable version of the CPE
update
likelihood A value between 0.1 and 1.0 where 1.0
represents the maximum likelihood of the CPE
to be real
replicated This is true if the record has been replicated on
the replica machine
cpe_edition The edition piece of the CPE
human_cpe_edition The human-readable version of the CPE
edition

node_cves
Vulnerabilities matched against current CPEs

id Primary key of this query source


node_id The id of the node this CPE refers to
cve The CVE id
cve_summary Summary of the vulnerability
cve_score The CVSS score assigned to this CVE
cve_creation_time The time this vulnerability has been discovered
(not installation specific, but CVE-specific)
cve_update_time The time this vulnerability has been updated
(not installation specific, but CVE-specific)
time Timestamp in epoch milliseconds when this
entity was created or updated
cwe_id ID of the category for this vulnerability

cwe_name Name of the category for this vulnerability


matching_cpes List of CPE that allowed to match this
vulnerability
cve_references List of public references to check more
information about this vulnerability
likelihood A value between 0.1 and 1.0 where 1.0
represents the maximum likelihood of the CVE
to be really present
resolved It is true if the vulnerability has been resolved
resolved_reason Specifies the possible resolution element for a
vulnerability
resolved_source Specifies the source of information validating
the resolution status
installed_on (For internal use)
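A common client-side task over this query source is triaging: keeping only unresolved vulnerabilities and ranking them. A sketch using the documented cve_score, likelihood and resolved fields (the ranking criterion itself is an illustrative choice, not part of the product):

```python
def top_open_cves(cves, limit=5):
    """Rank unresolved vulnerability records by CVSS score (cve_score),
    using likelihood (0.1-1.0) to break ties, and return the top ones."""
    open_cves = [c for c in cves if not c.get("resolved")]
    open_cves.sort(
        key=lambda c: (c.get("cve_score", 0.0), c.get("likelihood", 0.0)),
        reverse=True,
    )
    return open_cves[:limit]
```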

node_points
Data points polled via Smart Polling from monitored Nodes

id Primary key of this query source


node_id The id of the node this point refers to
strategy The strategy used to retrieve this point
time Timestamp in epoch milliseconds when this
entity was created or updated
name The name of the point
value (Deprecated) See content below
value_type The type of the point
human_name The human name of the point
content The actual content of the polled information

nodes
List of Nodes, where a node is an L2, L3, or other entity able to speak some protocol

appliance_host The hostname of the appliance where this entity has been observed
label Name of the node
id Primary key of this query source
ip IP address of the node. It can be either IPv4,
IPv6 or empty (in case of L2 node)
mac_address MAC address of the node. It can be missing in
some situations (serial nodes)

mac_address:info This is a metadata field about the mac_address field. The likelihood value within the structure represents a level of confidence regarding whether the MAC address is the native one of the node, or one routed/substituted by the network. The possible values are unconfirmed (no information is available), likely (some information indicates it can be native), and confirmed (it is certainly native).
mac_vendor MAC address vendor. Is not empty when the
MAC address is present and the corresponding
Vendor name is known.
subnet The subnet to which this node belongs, if any.
vlan_id The VLAN ID of the node. It can be absent if
the traffic to/from the node is not VLAN-tagged.
vlan_id:info This is a metadata field about the vlan_id field.
zone The zone name to which this node belongs
level The Purdue model level of the node
type The type of the node
type:info This is a metadata field about the type field.
os Operating System of the node, if available.
This field is not present when the
firmware_version is available.
vendor Vendor of the node
vendor:info This is a metadata field about the vendor field.
product_name The product name of the node
product_name:info This is a metadata field about the
product_name field.
firmware_version The firmware version of the node. The field is not present when the os field is available.
firmware_version:info This is a metadata field about the
firmware_version field.
serial_number The serial number of the node
serial_number:info This is a metadata field about the
serial_number field.
is_broadcast True if this is not a real node but a broadcast
or multicast entry
is_public True if this is not a local node but an outside, public IP.
reputation This can be good or bad depending on
information coming from STIX indicators
is_confirmed This is true for nodes that are confirmed to
exist. Non-existing targets of port scans for
instance are not confirmed
is_compromised This is true for nodes that have been
recognised as compromised according to
threat indicators

is_learned This is true for nodes that were observed during the learning phase
is_fully_learned This is true for nodes that were observed also
during the learning phase and which properties
are not changed since then
is_disabled This is true for nodes that are hidden from
graphs because too noisy
roles The set of application-level roles of the node.
Differently from the type, these are behaviors.
links The set of links to which this node is related
links_count The total number of links from and to this node
protocols The unique protocols used from and to this
node
created_at Timestamp in epoch milliseconds when this
node was first observed
first_activity_time Timestamp in epoch milliseconds when this node sent a packet for the first time
last_activity_time Timestamp in epoch milliseconds when this node sent a packet for the last time
received.packets Total number of packets received
received.bytes Total number of bytes received
received.last_5m_bytes Number of bytes received in the last 5 minutes
received.last_15m_bytes Number of bytes received in the last 15 minutes
received.last_30m_bytes Number of bytes received in the last 30
minutes
sent.packets Total number of packets sent
sent.bytes Total number of bytes sent
sent.last_5m_bytes Number of bytes sent in the last 5 minutes
sent.last_15m_bytes Number of bytes sent in the last 15 minutes
sent.last_30m_bytes Number of bytes sent in the last 30 minutes
tcp_retransmission.percent Percentage of TCP packets that have been
retransmitted
tcp_retransmission.packets Total number of TCP packets that have been
retransmitted
tcp_retransmission.bytes Total amount of bytes for TCP packets that
have been retransmitted
tcp_retransmission.last_5m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 5 minutes
tcp_retransmission.last_15m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 15 minutes
tcp_retransmission.last_30m_bytes Amount of bytes of TCP packets that have
been retransmitted in the last 30 minutes
variables_count Amount of variables attached to the node
device_id (Internal use)

properties Additional properties found by several protocols attached to the node
custom_fields Any additional custom field defined in the
custom Data Model
bpf_filter BPF filter for the node, used when performing
traces for this node and as building block for
link traces too
device_modules Set of modules of this devices, if any
capture_device Name of the interface from which this entity
has been detected

report_files
Generated reports available for consultation

id Primary key of this query source


name Name of the report file
create_file_at Time the report was created
deleted_at Time the entity was cancelled
time Timestamp in epoch milliseconds when this
entity was created or updated
appliance_id The id of the appliance where this entity has
been observed
appliance_ip The IP of the appliance where this entity has
been observed
appliance_host The hostname of the appliance where this
entity has been observed
synchronized True if this entity has been synchronized with
the upper CMC or Vantage
replicated This is true if the record has been replicated on
the replica machine
created_by User that generated the report
user_groups User groups allowed to see the report
file_type Type of file generated

sessions
Live, mostly open Sessions between Nodes. A Session is a specific application-level connection between nodes. A Link can hold one or more Sessions at a given time.

id Primary key of this query source


status Tells if the session is ACTIVE, CLOSED, etc
direction_is_known True if the session direction has been
discovered. If false, from and to may be
swapped.
from Client node id
to Server node id
from_zone Client zone
to_zone Server zone

transport_protocol Transport protocol of the session


from_port Port on the client side
to_port Port on the server side
protocol The protocol in which this entity has been
observed
vlan_id The VLAN ID of the session. It can be absent if
the traffic of the session is not VLAN-tagged.
transferred.packets Total number of packets transmitted
transferred.bytes Total number of bytes transmitted
transferred.last_5m_bytes Number of bytes transmitted in the last 5
minutes
transferred.last_15m_bytes Number of bytes transmitted in the last 15 minutes
transferred.last_30m_bytes Number of bytes transmitted in the last 30
minutes
transferred.smallest_packet_bytes Smallest packet size in bytes observed
transferred.biggest_packet_bytes Biggest packet size in bytes observed
transferred.avg_packet_bytes Average packet size in bytes observed
throughput_speed Live throughput for the entity
first_activity_time Timestamp in epoch milliseconds when this
session was found for the first time
last_activity_time Timestamp in epoch milliseconds when this
session was detected for the last time
key (Internal use)
bpf_filter BPF filter for the entity, used when performing
traces for this entity

sessions_history
Archived Sessions. See the sessions query source for more information

id Primary key of this query source


status Tells if the session is ACTIVE, CLOSED, etc
direction_is_known True if the session direction has been
discovered. If false, from and to may be
swapped.
from Client node id
to Server node id
from_zone Client zone
to_zone Server zone
transport_protocol Transport protocol of the session
from_port Port on the client side
to_port Port on the server side
protocol The protocol in which this entity has been
observed

vlan_id The VLAN ID of the session. It can be absent if the traffic of the session is not VLAN-tagged.
transferred.packets Total number of packets transmitted
transferred.bytes Total number of bytes transmitted
transferred.last_5m_bytes Number of bytes transmitted in the last 5
minutes
transferred.last_15m_bytes Number of bytes transmitted in the last 15 minutes
transferred.last_30m_bytes Number of bytes transmitted in the last 30
minutes
transferred.smallest_packet_bytes Smallest packet size in bytes observed
transferred.biggest_packet_bytes Biggest packet size in bytes observed
transferred.avg_packet_bytes Average packet size in bytes observed
throughput_speed Live throughput for the session
first_activity_time Timestamp in epoch milliseconds when this
session was found for the first time
last_activity_time Timestamp in epoch milliseconds when this
session was detected for the last time
key (Internal use)
bpf_filter BPF filter for the entity, used when performing
traces for this entity

variable_history
History of values for Variables where history has been enabled

id Primary key of this query source


var_key Variable identifier this historic value belongs to
value The captured value of the variable
datatype The type of the variable value
time Timestamp in epoch milliseconds when this
entity was created or updated
quality_enum The quality values attached to the variable
value
client_node The client node involved in the communication
when observing the variable
function_code Function code used to access the variable

variables
Variables extracted via DPI from the monitored system

var_key The primary key of this data source


host The node to which this variable belongs
host_label The label of the node to which this variable belongs
RTU_ID The RTU ID of the variable, if any. It is the
identifier of the subsystem in the producer to
which the variable belongs.

name The name of the variable, likely an identifier of the memory area
label The human-readable name of the variable
unit The unit for the value of the variable
scale The scale of the variable. By default it is 1.0,
and can be configured/changed with external
information.
offset The offset of the variable. By default it is 0.0,
and can be configured/changed with external
information.
type The type of the value of the variable.
is_numeric True if it represents a number
min_value The minimum observed value
max_value The maximum observed value
value The live, last observed value of the variable. Upon restart, this value is unknown because it needs to reflect the real time status.
bit_value The live, last observed value of the variable, expressed in bits. Upon restart, this value is unknown because it needs to reflect the real time status.
last_value The last observed value; it is persisted on reboots
last_value_is_valid True if the last value is valid (has valid quality)
last_value_quality The quality of the last value
last_cause The cause of the last value
protocol The protocol in which this entity has been observed
last_function_code_info The last value function code information
last_function_code The last value function code
first_activity_time Timestamp in epoch milliseconds when this variable was found for the first time
last_range_change_time Timestamp in epoch milliseconds when this variable's range changed
last_activity_time Timestamp in epoch milliseconds when this variable was detected for the last time
last_update_time Timestamp in epoch milliseconds of the last valid quality
last_valid_quality_time Timestamp in epoch milliseconds of the last time quality was valid
request_count The number of times this variable has been accessed
changes_count The number of times this variable has changed
latest_bit_change Indices of the flipped bits during the latest variable change
last_client The last node that accessed this variable (in read or write mode)
history_status Tells if the history is enabled or not on this variable
active_checks List of active real-time checks on the entity
flow_status Tells the status of the flow, that is, whether the variable has a cyclic behavior or not
flow_anomalies Reports anomalies in the flow, if any
flow_anomaly_in_progress Reports whether a flow anomaly is in progress or not
flow_hiccups_percent Shows the amount of hiccups in the flow
flow_stats.avg Shows the average access time
flow_stats.var Shows the variance of the access time
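The *_time fields above are expressed in epoch milliseconds. As a purely illustrative sketch (Python here, not Nozomi-provided code), such a value can be converted to a readable UTC time like this:

```python
from datetime import datetime, timezone

def epoch_ms_to_datetime(epoch_ms: int) -> datetime:
    """Convert an N2OS epoch-millisecond timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)

# Example value in the epoch-millisecond format used by these fields:
print(epoch_ms_to_datetime(1571351543431).isoformat())
```

Any language with a standard date/time library can perform the same conversion; the only point to remember is to divide by 1000 (or use a milliseconds-aware API) before converting.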
Chapter 4
Data Integrations Best Practices

This chapter details the best practices for using Nozomi Networks data integration, whether the data is obtained via Syslog integration or OpenAPI calls.

Topics:
• Data Sources for Integration
• Nozomi Syslog Data Types
• OpenAPI Data
• Certify Your Integration with Nozomi
| Data Integrations Best Practices | 80

Data Sources for Integration


There are two primary sources for accessing Nozomi Networks data: Syslog messages and the OpenAPI. Both options are discussed in detail below.

Syslog Events sent from the Nozomi Platform


When syslog output is sent from the Nozomi Platform, it is in the Common Event Format (CEF). The Nozomi appliance forwards syslog messages to the destination on port 514. See the User Manual Nozomi Networks Solution – N2OS document for specific details on configuring your Nozomi appliances to forward syslog messages. That document can be obtained from your customer portal account or by contacting your Nozomi Networks Delivery representative.

OpenAPI data that is retrieved from the Nozomi Platform


There is one primary query API that can be used to get data from the Nozomi Platform: the query call. It is quite powerful in that any query that can be performed in the web user interface can also be performed using the API.

Nozomi Syslog Data Types


For customers implementing syslog, Guardian generates three types of syslog events: Alerts, Health, and Audit. A key point is that Alert events should be identified by the Alert Type ID.
NOTE: The set of alert messages inside each Alert Type ID category will only increase over time. Therefore, do not perform primary searches on alert messages; rather, search on the Alert Type ID, Health Type ID, and Audit Type ID.
Note that Nozomi has defined 6 custom label fields in our CEF implementation.

Custom Field  Custom Field Label  Description
cs1           cs1Label            Risk: Risk level for the alert
cs2           cs2Label            IsSecurity: Is this a security alert
cs3           cs3Label            Id: The Alert ID (not Alert Type ID) of the alert in the Nozomi system
cs4           cs4Label            Detail: The alert details
cs5           cs5Label            Parents: The parent Ids of the alert if it is related to others
cs6           cs6Label            n2os_schema: This is the Nozomi Schema version
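Because the meaning of each csN field is carried by its companion csNLabel field, a robust integration should resolve the pairs rather than hard-coding positions. An illustrative sketch (Python, not Nozomi-provided code) of that resolution:

```python
def resolve_custom_labels(extension: dict) -> dict:
    """Map CEF csN/csNLabel pairs (e.g. cs1=9.0, cs1Label=Risk) to {label: value}."""
    resolved = {}
    for key, value in extension.items():
        if key.startswith("cs") and not key.endswith("Label"):
            # Look up the companion label; fall back to the raw field name.
            label = extension.get(key + "Label", key)
            resolved[label] = value
    return resolved

print(resolve_custom_labels({"cs1": "9.0", "cs1Label": "Risk",
                             "cs2": "true", "cs2Label": "IsSecurity"}))
```

With this approach the integration keeps working even if the label-to-field assignment changes between schema versions.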

Ensure that your integration recognizes these custom labels and deals with them appropriately.

Syslog Messages
Alert Events
There are many alert types in the Nozomi environment. The N2OS User Manual contains a full
reference of Alert Types.
Alert Events in CEF have the following format, e.g.:

<137>Oct 17 2019 22:32:23 local-sg-19.x n2osevents[0]: CEF:0|Nozomi Networks|N2OS|19.0.3-10142120_A2F44|SIGN:MALWARE-DETECTED|Malware detected|9|
app=smb
dvc=172.16.248.11
dvchost=local-sg-19.x
cs1=9.0
cs2=true
cs3=d25c520f-7f79-4820-b5ae-d1b334b05c75
cs4={trigger_type: yara_rules, trigger_id: MALW_DragonFly2.yar}
cs5=["5740a157-08e8-490f-85ad-eef23657e3cb"]
cs6=1
cs1Label=Risk
cs2Label=IsSecurity
cs3Label=Id
cs4Label=Detail
cs5Label=Parents
cs6Label=n2os_schema
dst=172.16.0.55
dmac=00:0c:29:28:dd:c5
dpt=445
msg=Suspicious transferring of malware named 'TemplateAttack_DragonFly_2_0'
was detected involving resource '\\172.16.0.55\ADMIN
\CVcontrolEngineer.docx' after a 'read' operation [rule author: US-CERT
Code Analysis Team - improved by Nozomi Networks] [yara file name:
MALW_DragonFly2.yar]
src=172.16.0.253
smac=00:04:23:e0:04:1c
spt=1148
proto=TCP
start=1571351543431

Note the SIGN:MALWARE-DETECTED part of the Alert message. This is the Alert Type ID. It should be used as the key for performing searches once Nozomi syslog events have been ingested into the integration platform.
Best Practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating with CEF messages, use a CEF parser, not regular expressions; this preserves the integrity of the integration as message content evolves. Once the correct parser is in place, test different inputs to ensure that data is correctly extracted from the messages.
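To make the parser-over-regex advice concrete, the toy sketch below (Python, illustrative only) splits a CEF record into its seven header fields and its extension key=value pairs, exposing the Signature ID that carries the Alert Type ID. It deliberately ignores CEF escape sequences (\| and \=); a production integration should use a maintained CEF parsing library instead.

```python
import re

def parse_cef(line: str) -> dict:
    """Minimal CEF reader: 7 header fields plus extension key=value pairs.
    Does NOT handle CEF escape sequences; for illustration only."""
    cef = line[line.index("CEF:"):]          # drop the syslog prefix
    parts = cef.split("|", 7)                # 7 header fields, then the extension
    ext_str = parts[7].strip() if len(parts) > 7 else ""
    extension = {}
    # Split at whitespace that is immediately followed by a new "key=" token,
    # so values containing spaces (like msg=...) stay intact.
    for token in re.split(r"\s(?=\w+=)", ext_str) if ext_str else []:
        key, _, value = token.partition("=")
        extension[key] = value
    return {
        "vendor": parts[1],
        "product": parts[2],
        "version": parts[3],
        "signature_id": parts[4],            # the Alert/Health/Audit Type ID
        "name": parts[5],
        "severity": parts[6],
        "extension": extension,
    }
```

Searching and routing can then key off `signature_id` rather than the free-form message text.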
Health Events
Health Events in CEF have the following format, e.g.:

<131>Oct 10 2019 15:57:48 local-sg-19.x n2osevents[0]: CEF:0|Nozomi Networks|N2OS|19.0.3-10201846_FD825|HEALTH|Health problem|0|
dvchost=local-sg-19.x
cs6=1
cs6Label=n2os_schema
msg=LINK_DOWN_on_port_em0

Note the HEALTH part of the Health message. This is the Health Type ID. It should be used as the key for performing searches once Nozomi syslog events have been ingested into the integration platform.
Best Practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating with CEF messages, use a CEF parser, not regular expressions; this preserves the integrity of the integration as message content evolves. Once the correct parser is in place, test different inputs to ensure that data is correctly extracted from the messages.
Audit Events
Audit Events in CEF have the following format, e.g.:

<134>Oct 10 2019 16:00:18 local-sg-19.x n2osevents[0]: CEF:0|Nozomi Networks|N2OS|19.0.3-10201846_FD825|AUDIT:SESSIONS:CREATE|User signed in|0|
dvchost=local-sg-19.x
cs1=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:69.0) Gecko/20100101
Firefox/69.0
cs6=1
cs1Label=browser
cs6Label=n2os_schema
msg=User signed in
src=172.16.248.1
suser=admin
start=1570723218425

Note the AUDIT:SESSIONS:CREATE part of the Audit message. This is the Audit Type ID. It should be used as the key for performing searches once Nozomi syslog events have been ingested into the integration platform.
Best Practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating with CEF messages, use a CEF parser, not regular expressions; this preserves the integrity of the integration as message content evolves. Once the correct parser is in place, test different inputs to ensure that data is correctly extracted from the messages.

OpenAPI Data

API Users
Nozomi recommends creating a user specifically for the purposes of API access. This provides a demarcation of responsibilities that is straightforward for auditing and traceability.
Best Practice: Create a user specifically for accessing the OpenAPI.
Authentication
Each call to one of the OpenAPI methods requires authentication. Currently, the OpenAPI supports Basic Authentication. For example, using curl, if you have a Username and Password for your API user, you will pass the following header along with your query:
-H "Authorization: Basic <AUTH_TOKEN>"
Where <AUTH_TOKEN> is the base64 encoding of Username:Password.
Note that the language and method of implementation (e.g. curl vs Java) will dictate how the specifics of the Basic Authentication are performed.
Note: When querying the OpenAPI for data, the -k --user Username:Password options may be used instead for Basic Authentication.

Querying Nozomi Appliances


Data that is retrieved from the OpenAPI is done by calling the OpenAPI HTTP interface on either a
Guardian or CMC appliance.
The query endpoint is very powerful and allows the integrator to manipulate data through the use of
queries. A full list of the available query data sources, commands, and functions is available in the
N2OS User Manual.
Note: The credentials of the user performing the OpenAPI call to query data must be in a group that
has the Queries and exports permission set. This allows the user to view the query section and to
export data.

Simple query example


This query will retrieve the nodes in the Nozomi appliance:
curl -k -H "Authorization: Basic <AUTH_TOKEN>" https://<YourHost>/api/open/query/do?query=nodes

If there are two nodes, the results will be similar to this:

{
"header": [
All of the headers…
],
"result": [
{ First Node data },
{ Second Node data }
],
"total": 2
}

Complex query example


If you want to use a complex query command, ensure that it is URI-encoded properly. This query will retrieve the count of nodes in the Nozomi appliance:
curl -k -H "Authorization: Basic <AUTH_TOKEN>" https://<YourHost>/api/open/query/do?query=nodes%20%7C%20count
Note that the original query text "nodes | count" has been URI-encoded to "nodes%20%7C%20count".
Note that the language and method of implementation will dictate how the specifics of the URI encoding are accomplished.
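For instance, in Python the standard library performs this encoding (illustrative sketch; the host name is a placeholder):

```python
from urllib.parse import quote

query = "nodes | count"
encoded = quote(query)       # space -> %20, pipe -> %7C
url = f"https://<YourHost>/api/open/query/do?query={encoded}"
print(encoded)  # nodes%20%7C%20count
```

Other languages offer equivalents (e.g. encodeURIComponent in JavaScript, URLEncoder in Java).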
Uploading Asset Information to Nozomi Appliances
Data can also be uploaded into Guardian or CMC to add or enhance Asset information. This is referred to as "importing" in the OpenAPI.
The import endpoint is simple and allows the integrator to upload node data through the use of import statements. The list of the commands is available in the N2OS User Manual.
Note: The credentials of the user performing the OpenAPI call to import data must be in the admin group to upload information into a Nozomi Appliance.

Import Example Using CURL with CSV file


Using a sample csv file, assets.csv, that looks like this:
ip,label,firmware_version,vendor,product_name,serial_number,mac_address
192.168.1.60,CSV Uploaded Asset 1,1.2.2,ACME,ACME Product 1,abcdefge,00:01:02:03:04:06
192.168.1.61,CSV Uploaded Asset 2,1.2.2,ACME,ACME Product 2,abcdefge,00:11:12:13:14:16
The following command will upload these assets into the Guardian or CMC:
curl -k -X POST https://<YourHost>/api/open/nodes/import -H "Authorization:
Basic <AUTH_TOKEN>" -F file=@<PathTo>/assets.csv

Import Example Using CURL with JSON file


Using a sample json file, assets.json that looks like this:

{
  "nodes": [
    {
      "ip": "1.2.3.8",
      "label": "JSON_Uploaded_Asset_1",
      "mac_address": "00:00:00:11:11:11",
      "firmware_version": "1.2.3",
      "product_name": "ACME_PLC_2",
      "serial_number": "1-789A10-2",
      "vendor": "ACME"
    },
    {
      "ip": "1.2.3.3",
      "label": "JSON_Uploaded_Asset_2",
      "mac_address": "00:00:00:11:11:15",
      "firmware_version": "1.2.2",
      "product_name": "ACME_PLC_1",
      "serial_number": "1-789A10-6",
      "vendor": "ACME"
    }
  ]
}

Depending on your curl implementation, the file may have to be submitted using -d as in the example below. Note that the JSON payload is wrapped in single quotes so the shell passes it to -d as a single argument.
The following command will upload these assets into the Guardian or CMC:

curl -k -X POST https://<YourHost>/api/open/nodes/import_from_json -H "Authorization: Basic <AUTH_TOKEN>" -H "Content-Type: application/json" -d '{"nodes":[{"ip":"1.2.3.8","label":"JSON_Uploaded_Asset_1","mac_address":"00:00:00:11:11:11","firmware_version":"1.2.3","product_name":"ACME_PLC_2","serial_number":"1-789A10-2","vendor":"ACME"},{"ip":"1.2.3.3","label":"JSON_Uploaded_Asset_2","mac_address":"00:00:00:11:11:15","firmware_version":"1.2.2","product_name":"ACME_PLC_1","serial_number":"1-789A10-6","vendor":"ACME"}]}'

Import Commands

import_from_csv_file
  HTTP Parameters: -F file=@</path/to/CSV_FILE>
  Description: Allows the import of asset information from a csv file. The csv file must have the appropriate column headers present in the first line.

import_from_json
  HTTP Parameters: -H 'Content-Type: application/json' -d <JSON_DATA>
  Description: Allows the import of asset information from JSON data. Note that the JSON data is specified in the HTTP request directly.

Downloading traces
Traces associated with an alert can be downloaded via the API as well. You will need the alert ID in order to accomplish this. The following command will download the trace associated with an alert ID <YourAlertID> to the file specified by <YourTraceFile>:
curl -k -X GET https://<YourHost>/api/open/alerts/<YourAlertID>/trace -H "Authorization: Basic <AUTH_TOKEN>" -H "Content-Type: application/json" --output <YourTraceFile>

Certify Your Integration with Nozomi


If you would like your integration to be considered approved by Nozomi technical personnel, then you
will need to submit several items describing your integration with the Nozomi platform. These include
any marketing and technical materials that you may have created.
In addition, you will be required to perform a live demonstration of your integration with the Nozomi
platform.
To have your integration certified by Nozomi, we will require the following items:

• When performing searches on messages, only search on message type IDs.
  • This is especially important with regard to Alert Type IDs.
• For CEF Syslog integrations, ensure that custom fields are mapped as necessary for your environment.
  • Nozomi has custom string labels defined in our CEF implementation.
• Please provide a configuration guide for integrating your product with the Nozomi Networks platform.
• Please provide any relevant sales collateral you have produced (including solution briefs, videos, etc.) for the integration with Nozomi.
• Schedule a live demo of the integration. This should include a complete walkthrough of the integration, from initial configuration to events flowing from the Nozomi platform into your environment. Any discrepancies discovered by Nozomi must be addressed prior to certification.
• Once Nozomi technical personnel have approved the materials, the integration will be considered complete. The integration will then be granted Certified Technology Partner status.

Nozomi Certification Checklist

1. When performing searches on Alert messages, only search on Alert Type IDs.
2. For CEF Syslog integrations, ensure that custom fields are mapped as necessary for your environment.
3. Please provide a configuration guide for integrating your product with the Nozomi Networks platform.
4. Please provide any relevant sales collateral you have produced (including solution briefs, videos, etc.) for the integration with Nozomi.
5. Schedule a live demo of the integration. This should include a complete walkthrough of the integration, from initial configuration to events flowing from the Nozomi platform into your environment.
6. Once Nozomi technical personnel have approved the materials, the integration will be considered complete. The integration will then be granted Certified Technology Partner status.
