.. _writing-scripts:
===================
Writing Bro Scripts
===================
.. contents::
Understanding Bro Scripts
=========================
.. todo::
The MHR integration has changed significantly since the text was
written. We need to update it, however I'm actually not sure this
script is a good introductory example anymore unfortunately.
-Robin
Bro includes an event-driven scripting language that provides
the primary means for an organization to extend and customize Bro's
functionality. Virtually all of the output generated by Bro
is, in fact, generated by Bro scripts. It helps to think of Bro as an
entity working behind the scenes, processing connections and generating
events, while Bro's scripting language is the medium through which we
mere mortals communicate with it. A Bro script effectively tells Bro:
when an event of this type occurs, hand over the information about the
connection so we can act on it. For example, the ``ssl.log`` file is
generated by a Bro script that walks the entire certificate chain and
issues notifications if any of the steps along the certificate chain
are invalid. This entire process is set up by telling Bro that whenever
it sees a server or client issue an SSL ``HELLO`` message, we want to
know about that connection.
It's often easiest to understand Bro's scripting language by
looking at a complete script and breaking it down into its
identifiable components. In this example, we'll take a look at how
Bro queries the Team Cymru Malware Hash Registry for files downloaded
via HTTP. Part of the Team Cymru Malware Hash Registry service is the
ability to do a DNS lookup on a name of the format
``MALWARE_HASH.malware.hash.cymru.com``, where ``MALWARE_HASH`` is the MD5 or
SHA1 hash of a file. Team Cymru also populates the TXT record of
their DNS responses with both a "last seen" timestamp and a numerical
"detection rate". The important aspect to understand is that Bro already
generates hashes for files it can parse from HTTP streams, but the
script ``detect-MHR.bro`` is responsible for generating the
appropriate DNS lookup and parsing the response.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
Visually, there are three distinct sections in the script: a base
level with no indentation, followed by an indented and formatted
section declaring the custom variables being exported (``export``), and another
indented and formatted section describing the instructions for a
specific event (``event log_http``). Don't get discouraged if you don't
understand every section of the script; we'll cover the basics of the
script and much more in the following sections.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 7-11
Lines 7 and 8 of the script process the ``__load__.bro`` script in each of
the directories being loaded. Including ``@load`` directives for a
script's dependencies is considered good practice, or even just good
manners, when writing Bro scripts, because it ensures the script can be
used on its own. While it's unlikely that in a
full production deployment of Bro these additional resources wouldn't
already be loaded, it's not a bad habit to try to get into as you get
more experienced with Bro scripting. If you're just starting out,
this level of granularity might not be entirely necessary though.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 12-24
The export section redefines an enumerable constant that describes the
type of notice we will generate with the logging framework. Bro
allows for redefinable constants, which, at first, might seem
counter-intuitive. We'll get more in-depth with constants in a later
chapter; for now, think of them as variables that can only be altered
before Bro starts running. The notice type listed allows for the use
of the :bro:id:`NOTICE` function to generate notices of type
``Malware_Hash_Registry_Match``, as done in the next section. Notices
allow Bro to generate some kind of extra notification beyond its
default log types. Oftentimes, this extra notification comes in the
form of an email generated and sent to a pre-configured address.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 26-44
The workhorse of the script is contained in the event handler for
``log_http``. The ``log_http`` event is defined as an event-hook in
the :doc:`/scripts/base/protocols/http/main` script and allows scripts
to handle a connection as it is being passed to the logging framework.
The event handler is passed an :bro:type:`HTTP::Info` data structure,
which will be referred to as ``rec`` in the body of the event handler.
An ``if`` statement is used to check for the existence of a field
named ``md5`` nested within the ``rec`` data structure. Bro uses ``$`` as
its dereference operator, and it is combined with ``?`` in this script to
check whether ``rec$md5`` is present by including the ``?`` operator within
the path. If the ``rec`` data structure includes a nested field
named ``md5``, the condition evaluates to true and a local variable
named ``hash_domain`` is provisioned and given a format string based on
the contents of ``rec$md5`` to produce a valid DNS lookup.
The rest of the script is contained within a ``when`` block. In
short, a ``when`` block is used when Bro needs to perform asynchronous
actions, such as a DNS lookup, without blocking and affecting performance.
The ``when`` block performs a DNS TXT lookup and stores the result
in the local variable ``MHR_result``. Effectively, processing for
this event continues, and upon receipt of the values returned by
:bro:id:`lookup_hostname_txt`, the ``when`` block is executed. The
``when`` block splits the string returned into two separate values and
checks that they are in the expected format. If the format is invalid, the
script assumes that the hash wasn't found in the repository and
processing is concluded. If the format is as expected and the
detection rate is above the threshold set by ``MHR_threshold``, two
new local variables are created and used in the notice issued by
:bro:id:`NOTICE`.
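Stripped of the MHR-specific details, the asynchronous pattern looks
roughly like the sketch below; the hostname, the timeout, and the print
statements are purely illustrative and are not part of ``detect-MHR.bro``.

.. code-block:: bro

    event bro_init()
        {
        # Hypothetical lookup: processing continues past the when block,
        # and its body runs only once the asynchronous DNS TXT query returns.
        when ( local txt_answer = lookup_hostname_txt("example.com") )
            {
            print fmt("TXT answer: %s", txt_answer);
            }
        timeout 5sec
            {
            print "lookup timed out";
            }
        }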
In approximately 15 lines of actual code, Bro provides an amazing
utility that would be incredibly difficult to implement and deploy
with other products. In truth, claiming that Bro does this in 15
lines is a misdirection; there is a truly massive number of things
going on behind the scenes in Bro, but it is the inclusion of the
scripting language that gives analysts access to those underlying
layers in a succinct and well-defined manner.
The Event Queue and Event Handlers
==================================
Bro's scripting language is event driven, which is a gear change from
the majority of scripting languages with which most users will have
previous experience. Scripting in Bro depends on handling the events
generated by Bro as it processes network traffic, altering the state
of data structures through those events, and making decisions based on the
information provided. This approach to scripting can often cause
confusion to users who come to Bro from a procedural or functional
language, but once the initial shock wears off it becomes clearer
with each exposure.
Bro's core acts to place events into an ordered "event queue",
allowing event handlers to process them on a first-come, first-served
basis. In effect, this is Bro's core functionality, as without the
scripts written to perform discrete actions on events, there would be
little to no usable output. As such, a basic understanding of the
event queue, the events being generated, and the way in which event
handlers process those events is a basis not only for learning to
write scripts for Bro but for understanding Bro itself.
Gaining familiarity with the specific events generated by Bro is a big
step towards building a mindset for working with Bro scripts. The
majority of events generated by Bro are defined in the
built-in-function files, or ``.bif`` files, which also act as the basis for
the online event documentation. These in-line comments are compiled into
an online documentation system using Broxygen. Whether starting a
script from scratch or reading and maintaining someone else's script,
having the built-in event definitions available is an excellent
resource to have on hand. For the 2.0 release the Bro developers put
significant effort into the organization and documentation of every event.
This effort resulted in built-in-function files organized such that
each entry contains a descriptive event name, the arguments passed to
the event, and a concise explanation of the function's use.
.. btest-include:: ${BRO_SRC_ROOT}/build/scripts/base/bif/plugins/Bro_DNS.events.bif.bro
:lines: 29-54
Above is a segment of the documentation for the event
:bro:id:`dns_request` (and the preceding link points to the
documentation generated out of it). It's organized such that the
documentation, commentary, and list of arguments precede the actual
event definition used by Bro. As Bro detects DNS requests being
issued by an originator, it issues this event, and any number of
scripts then have access to the data Bro passes along with the event.
In this example, Bro passes not only the message, the query, query
type and query class of the DNS request, but also the record Bro uses
to track the state of the connection itself.
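For instance, a minimal handler for this event only needs to match the
signature shown above; the print statement here is purely illustrative.

.. code-block:: bro

    event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count)
        {
        # Report the query together with the originator, taken from the
        # connection record that accompanies the event.
        print fmt("DNS query for %s from %s", query, c$id$orig_h);
        }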
The Connection Record Data Type
===============================
Of all the events defined by Bro, an overwhelmingly large number of
them are passed the :bro:type:`connection` record data type, in effect,
making it the backbone of many scripting solutions. The connection
record itself, as we will see in a moment, is a mass of nested data
types used to track state on a connection through its lifetime. Let's
walk through the process of selecting an appropriate event, generating
some output to standard out and dissecting the connection record so as
to get an overview of it. We will cover data types in more detail
later.
While Bro is capable of packet-level processing, its strengths lie in
the context of a connection between an originator and a responder. As
such, there are events defined for the primary parts of the connection
life-cycle, as you'll see from the small selection of
connection-related events below.
.. btest-include:: ${BRO_SRC_ROOT}/build/scripts/base/bif/event.bif.bro
:lines: 69-72,88,106-109,129,132-137,148
Of the events listed, the event that will give us the best insight
into the connection record data type will be
:bro:id:`connection_state_remove`. As detailed in the in-line
documentation, Bro generates this event just before it decides to
remove the connection from memory, effectively forgetting about it. Let's
take a look at a simple script, stored as
``connection_record_01.bro``, that will output the connection record
for a single connection.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_01.bro
Again, we start with ``@load``, this time importing the
:doc:`/scripts/base/protocols/conn/index` scripts which supply the tracking
and logging of general information and state of connections. We
handle the :bro:id:`connection_state_remove` event and simply print
the contents of the argument passed to it. For this example we're
going to run Bro in "bare mode" which loads only the minimum number of
scripts to retain operability and leaves the burden of loading
required scripts to the script being run. While bare mode is a low
level functionality incorporated into Bro, in this case, we're going
to use it to demonstrate how different features of Bro add more and
more layers of information about a connection. This will give us a
chance to see the contents of the connection record without it being
overly populated.
.. btest:: connection-record-01
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_01.bro
As you can see from the output, the connection record is something of
a jumble when printed on its own. Regularly taking a peek at a
populated connection record helps to understand the relationship
between its fields as well as allowing an opportunity to build a frame
of reference for accessing data in a script.
Bro makes extensive use of nested data structures to store state and
information gleaned from the analysis of a connection as a complete
unit. To break down this collection of information, you will have to
make use of Bro's field delimiter ``$``. For example, the
originating host is referenced by ``c$id$orig_h``, which, put into a
narrative, reads: ``orig_h`` is a member of ``id``, which is
a member of the data structure referred to as ``c`` that was passed
into the event handler. Given that the responder port
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base DNS scripts
can further populate the connection record. Let's load the
``base/protocols/dns`` scripts and check the output of our script.

Bro uses the dollar sign as its field delimiter, and a direct
correlation exists between the output of the connection record and the
proper format of a dereferenced variable in scripts. In the output of
the script above, groups of information are collected between
brackets, which correspond to the ``$`` delimiter in a Bro script.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_02.bro
.. btest:: connection-record-02
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_02.bro
The addition of the ``base/protocols/dns`` scripts populates the
``dns=[]`` member of the connection record. While Bro is doing a
massive amount of work in the background, it is in what is commonly
called "scriptland" that details are being refined and decisions
being made. Were we to continue running in "bare mode" we could slowly
keep adding infrastructure through ``@load`` statements. For example,
were we to ``@load base/frameworks/logging``, Bro would generate a
``conn.log`` and ``dns.log`` for us in the current working directory.
As mentioned above, including the appropriate ``@load`` statements is
not only good practice, but can also help to indicate which
functionalities are being used in a script. Take a second to run the
script without the ``-b`` flag and check the output when all of Bro's
functionality is applied to the tracefile.
Data Types and Data Structures
==============================
Scope
-----
Before embarking on an exploration of Bro's native data types and data
structures, it's important to have a good grasp of the different
levels of scope available in Bro and the appropriate times to use them
within a script. Variable declarations in Bro come in two
forms: a variable can be declared without an initial value in the
form ``SCOPE name: TYPE``, or with one in the form ``SCOPE name = EXPRESSION``;
both produce the same result if ``EXPRESSION`` evaluates to the
intended ``TYPE``. The decision as to which form of declaration to
use is likely to be dictated by personal preference and readability.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_declaration.bro
Global Variables
~~~~~~~~~~~~~~~~
A global variable is used when the state of a variable needs to be
tracked, not surprisingly, globally. While there are some caveats,
when a script declares a variable using the global scope, that script
is granting access to that variable from other scripts. However, when
a script uses the ``module`` keyword to give the script a namespace,
more care must be given to the declaration of globals to ensure the
intended result. When a global is declared in a script with a
namespace, there are two possible outcomes. First, the variable is
available only within the context of the namespace. In this scenario,
other scripts within the same namespace will have access to the
variable declared, while scripts using a different namespace or no
namespace altogether will not have access to the variable.
Alternatively, if a global variable is declared within an ``export { ... }``
block, that variable is available to any other script through the
naming convention ``MODULE::variable_name``.
The declaration below is taken from the
:doc:`/scripts/policy/protocols/conn/known-hosts` script and
declares a variable called ``known_hosts`` as a global set of unique
IP addresses within the ``Known`` namespace and exports it for use
outside of the ``Known`` namespace. Were we to want to use the
``known_hosts`` variable we'd be able to access it through
``Known::known_hosts``.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/conn/known-hosts.bro
:lines: 8-10, 32, 37
The sample above also makes use of an ``export { ... }`` block. When the ``module``
keyword is used in a script, the variables declared are said to be in
that module's "namespace". Whereas a global variable can be accessed
by its name alone when it is not declared within a module, a global
variable declared within a module must be exported and then accessed
via ``MODULE_NAME::VARIABLE_NAME``. As in the example above, we would be
able to access the ``known_hosts`` variable from a separate script via
``Known::known_hosts``, due to the fact that ``known_hosts`` was declared as
a global variable within an export block under the ``Known`` namespace.
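As a brief, hypothetical sketch of the two outcomes described above (the
module name and variables are invented purely for illustration):

.. code-block:: bro

    module Example;

    export {
        # Exported: other scripts can reach this as Example::exported_hosts.
        global exported_hosts: set[addr];
    }

    # Not exported: visible only to scripts that also declare "module Example".
    global hidden_hosts: set[addr];

A separate script could then add to or test membership in
``Example::exported_hosts``, while ``hidden_hosts`` remains private to
scripts sharing the ``Example`` namespace.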
Constants
~~~~~~~~~
Bro also makes use of constants, which are denoted by the ``const``
keyword. Unlike globals, constants can only be set or altered at
parse time, and then only if the ``&redef`` attribute has been used.
Afterwards (at runtime) the constants are unalterable. In most cases,
redefinable constants are used in Bro scripts as containers for
configuration options. For example, the configuration option to log
passwords decrypted from HTTP streams is stored in
``HTTP::default_capture_password``, as shown in the stripped-down
excerpt from :doc:`/scripts/base/protocols/http/main` below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
:lines: 8-10,19,20,118
Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the
following line to our ``site/local.bro`` file before firing up Bro.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_const_simple.bro
While the idea of a redefinable constant might be odd, the constraint
that constants can only be altered at parse-time remains even with the
``&redef`` attribute. In the code snippet below, a table of strings
indexed by ports is declared as a constant before two values are added
to the table through ``redef`` statements. The table is then printed
in a :bro:id:`bro_init` event. Were we to try to alter the table in
an event handler, Bro would notify the user of an error and the script
would fail.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_const.bro
.. btest:: data_type_const.bro
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_type_const.bro
Local Variables
~~~~~~~~~~~~~~~
Whereas globals and constants are widely available in scriptland
through various means, when a variable is defined with a local scope,
its availability is restricted to the body of the event or function in
which it was declared. Local variables tend to be used for values
that are only needed within a specific scope; once the processing
of a script passes beyond that scope and the variable is no longer used,
it is deleted. Bro maintains names of locals separately from globally
visible ones, an example of which is illustrated below. The script
executes the event handler :bro:id:`bro_init` which in turn calls the
function ``add_two(i: count)`` with an argument of ``10``. Once Bro
enters the ``add_two`` function, it provisions a locally scoped
variable called ``added_two`` to hold the value of ``i+2``, in this
case, ``12``. The ``add_two`` function then prints the value of the
``added_two`` variable and returns its value to the ``bro_init`` event
handler. At this point, the variable ``added_two`` has fallen out of
scope and no longer exists, while the value ``12`` is still in use and
stored in the locally scoped variable ``test``. When Bro finishes
processing the ``bro_init`` function, the variable called ``test`` is
no longer in scope and, since there exist no other references to the
value ``12``, the value is also deleted.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_local.bro
Data Structures
---------------
It's difficult to talk about Bro's data types in a practical manner
without first covering the data structures available in Bro. Some of
the more interesting characteristics of data types are revealed when
used inside of a data structure, but given that data structures are
made up of data types, it devolves rather quickly into a
"chicken-and-egg" problem. As such, we'll introduce data types from
a bird's eye view before diving into data structures and from there a
more complete exploration of data types.
The table below shows the atomic types used in Bro, of which the
first four should seem familiar if you have some scripting experience,
while the remaining six are less common in other languages. It should
come as no surprise that a scripting language for a Network Security
Monitoring platform has a fairly robust set of network-centric data
types, and taking note of them here may well save you a late night of
reinventing the wheel.
+-----------+-------------------------------------+
| Data Type | Description                         |
+===========+=====================================+
| int       | 64 bit signed integer               |
+-----------+-------------------------------------+
| count     | 64 bit unsigned integer             |
+-----------+-------------------------------------+
| double    | double precision floating point     |
+-----------+-------------------------------------+
| bool      | boolean (T/F)                       |
+-----------+-------------------------------------+
| addr      | IP address, IPv4 and IPv6           |
+-----------+-------------------------------------+
| port      | transport layer port                |
+-----------+-------------------------------------+
| subnet    | CIDR subnet mask                    |
+-----------+-------------------------------------+
| time      | absolute epoch time                 |
+-----------+-------------------------------------+
| interval  | a time interval                     |
+-----------+-------------------------------------+
| pattern   | regular expression                  |
+-----------+-------------------------------------+
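As a quick, illustrative sketch of what constants of some of the
network-centric types above look like (the specific values are arbitrary):

.. code-block:: bro

    event bro_init()
        {
        # Arbitrary literal values for a few of the types listed above.
        local a: addr = 192.168.1.100;
        local s: subnet = 192.168.0.0/16;
        local p: port = 443/tcp;
        local i: interval = 5min;
        local re: pattern = /ssl|tls/;

        print a, s, p, i;
        print re in "ssl certificate";    # T: embedded pattern match
        }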
Sets
~~~~
Sets in Bro are used to store unique elements of the same data
type. In essence, you can think of them as "a unique set of integers"
or "a unique set of IP addresses". While the declaration of a set may
differ based on the data type being collected, the set will always
contain unique elements and the elements in the set will always be of
the same data type. Such requirements make the set data type perfect
for information that is already naturally unique such as ports or IP
addresses. The code snippet below shows both an explicit and implicit
declaration of a locally scoped set.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_set_declaration.bro
:lines: 1-4,22
As you can see, sets are declared using the format ``SCOPE var_name:
set[TYPE]``. Adding and removing elements in a set is achieved using
the ``add`` and ``delete`` statements. Once you have elements inserted into
the set, it's likely that you'll need to either iterate over that set
or test for membership within the set, both of which are covered by
the ``in`` operator. In the case of iterating over a set, combining the
``for`` statement and the ``in`` operator will allow you to sequentially
process each element of the set as seen below.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_set_declaration.bro
:lines: 17-21
Here, the ``for`` statement loops over the contents of the set storing
each element in the temporary variable ``i``. With each iteration of
the ``for`` loop, the next element is chosen. Since sets are not an
ordered data type, you cannot guarantee the order in which the elements
will be processed by the ``for`` loop.
To test for membership in a set, the ``in`` statement can be combined
with an ``if`` statement to return a true or false value. If the
exact element in the condition is already in the set, the condition
returns true and the body executes. The ``in`` statement can also be
negated by the ``!`` operator to create the inverse of the condition.
While we could rewrite the corresponding line below as ``if ( !(
587/tcp in ssl_ports ) )``, try to avoid using this construct; instead,
negate the ``in`` operator itself. While the functionality is the same,
using ``!in`` is more efficient as well as a more natural construct,
which will aid the readability of your script.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_set_declaration.bro
:lines: 13-15
You can see the full script and its output below.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_set_declaration.bro
.. btest:: data_struct_set_declaration
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_set_declaration.bro
Tables
~~~~~~
A table in Bro is a mapping of a key to a value or yield. While the
values don't have to be unique, each key in the table must be unique
to preserve a one-to-one mapping of keys to values. In the example
below, we've compiled a table of SSL-enabled services and their common
ports. The explicit declaration and constructor for the table on
lines 3 and 4 lay out the data types of the keys (strings) and the
data types of the yields (ports) and then fill in some sample key and
yield pairs. Line 5 shows how to use a table accessor to insert one
key-yield pair into the table. When using the ``in`` operator on a table,
you are effectively working with the keys of the table. In the case
of an ``if`` statement, the ``in`` operator will check for membership among
the set of keys and return a true or false value. As seen on line 7,
we are checking if ``SMTPS`` is not in the set of keys for the
ssl_services table and if the condition holds true, we add the
key-yield pair to the table. Line 12 shows the use of a ``for`` statement
to iterate over each key currently in the table.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
.. btest:: data_struct_table_declaration
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
Simple examples aside, tables can become extremely complex as the keys
and values for the table become more intricate. Tables can have keys
comprised of multiple data types and even a series of elements called
a "tuple". The flexibility gained with the use of complex tables in
Bro implies a cost in complexity for the person writing the scripts
but pays off in effectiveness given the power of Bro as a network
security platform.
The script below shows a sample table of strings indexed by two
strings, a count, and a final string. With a tuple acting as an
aggregate key, the order is important, as a change in order would
result in a new key. Here, we're using the table to track the
director, studio, year of release, and lead actor in a series of
samurai flicks. It's important to note that in the case of the ``for``
statement, it's an all-or-nothing kind of iteration. We cannot
iterate over, say, the directors; we have to iterate with the exact
format of the keys themselves. In this case, we need square brackets
surrounding four temporary variables to act as a collection for our
iteration. While this is a contrived example, we could easily have
had keys containing IP addresses (``addr``), ports (``port``) and even a ``string``
calculated as the result of a reverse hostname lookup.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_complex.bro
.. btest:: data_struct_table_complex
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_struct_table_complex.bro
Vectors
~~~~~~~
If you're coming to Bro with a programming background, you may or may
not be familiar with the vector data type, depending on your language of
choice. On the surface, vectors perform much of the same
functionality as associative arrays with unsigned integers as their
indices. They are, however, more efficient than that and allow for
ordered access. As such, any time you need to sequentially store data of the
same type in Bro, you should reach for a vector. Vectors are a
collection of objects, all of which are of the same data type, to
which elements can be dynamically added or removed. Since vectors use
contiguous storage for their elements, the contents of a vector can be
accessed through a zero-indexed numerical offset.
The format for the declaration of a vector follows the pattern of
other declarations, namely ``SCOPE v: vector of T``, where ``v`` is
the name of your vector and ``T`` is the data type of its members.
For example, the following snippet shows an explicit and an implicit
declaration of two locally scoped vectors. The script populates the
first vector by inserting values at the end, using the vector's current
length, obtained by placing the vector name between two vertical pipes,
as the index, before printing the contents of both vectors and their
current lengths.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_vector_declaration.bro
.. btest:: data_struct_vector_declaration
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_vector_declaration.bro
In a lot of cases, storing elements in a vector is simply a precursor
to then iterating over them. Iterating over a vector is easy with the
``for`` keyword. The sample below iterates over a vector of IP
addresses and for each IP address, masks that address with 18 bits.
The ``for`` keyword is used to generate a locally scoped variable
called ``i`` which will hold the index of the current element in the
vector. Using ``i`` as an index to ``addr_vector``, we can access the
current item in the vector with ``addr_vector[i]``.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_vector_iter.bro
.. btest:: data_struct_vector_iter
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_struct_vector_iter.bro
Data Types Revisited
--------------------
addr
~~~~
The ``addr``, or address, data type manages to cover a surprisingly
large amount of ground while remaining succinct. IPv4, IPv6 and even
hostname constants are included in the ``addr`` data type. While IPv4
addresses use the default dotted-quad formatting, IPv6 addresses use
the notation defined in RFC 2373, with the addition of square brackets
wrapping the entire address. When you venture into hostname
constants, Bro performs a little sleight of hand for the benefit of the
user; a hostname constant is, in fact, a set of addresses. Bro will
issue a DNS request when it sees a hostname constant in use and return
a set whose elements are the answers to the DNS request. For example,
if you were to use ``local google = www.google.com;`` you would end up
with a locally scoped ``set[addr]`` with elements that represent the
current set of round-robin DNS entries for Google. At first blush,
this seems trivial, but it is yet another example of Bro making the
life of the common Bro scripter a little easier through abstraction
applied in a practical manner. (Note, however, that these IP addresses
will never get updated during Bro's processing, so this mechanism is
most useful for addresses that are expected to remain static.)
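Following the ``www.google.com`` example from the text, a minimal sketch of
iterating over the resulting ``set[addr]`` might look like the following;
keep in mind that the lookup happens once, when Bro evaluates the constant,
and the answers are never refreshed.

.. code-block:: bro

    event bro_init()
        {
        # The hostname constant resolves to a set[addr] holding the
        # current DNS answers for the name.
        local google = www.google.com;

        for ( ip in google )
            print ip;
        }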
port
~~~~
Transport layer port numbers in Bro are represented in the format
``<port number>/<protocol>``, e.g., ``22/tcp`` or
``53/udp``. Bro supports TCP (``/tcp``), UDP (``/udp``),
ICMP (``/icmp``) and UNKNOWN (``/unknown``) as protocol designations.
While ICMP doesn't have an actual port, Bro supports the concept of
ICMP "ports" by using the ICMP message type and ICMP message code as
the source and destination port respectively. Ports can be compared
for equality using the ``==`` or ``!=`` operators and can even be
compared for ordering. Bro gives the protocol designations the
following "order": ``unknown`` < ``tcp`` < ``udp`` < ``icmp``. For
example, ``65535/tcp`` is smaller than ``0/udp``.
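A small sketch illustrating these comparisons (the specific ports are
arbitrary):

.. code-block:: bro

    event bro_init()
        {
        # Equality requires both the number and the protocol to match.
        print 22/tcp == 22/udp;     # F

        # Ordering ranks the protocol first: unknown < tcp < udp < icmp.
        print 65535/tcp < 0/udp;    # T
        }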
subnet
~~~~~~
Bro has full support for CIDR notation subnets as a base data type.
There is no need to manage an IP address and its subnet mask as two separate
entities when you can provide the same information in CIDR notation in
your scripts. The example below uses a Bro script to
determine whether a series of IP addresses fall within a set of subnets
defined with a 20 bit subnet mask.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_subnets.bro
Because this is a script that doesn't use any kind of network
analysis, we can handle the event :bro:id:`bro_init` which is always
generated by Bro's core upon startup. On lines six and seven, two
locally scoped vectors are created to hold our lists of subnets and IP
addresses respectively. Then, using a set of nested ``for`` loops, we
iterate over every subnet and every IP address and use an ``if``
statement to compare an IP address against a subnet using the ``in``
operator. The ``in`` operator returns true if the IP address falls
within a given subnet based on the longest prefix match calculation.
For example, ``10.0.0.1 in 10.0.0.0/8`` would return true while
``192.168.2.1 in 192.168.1.0/24`` would return false. When we run the
script, we get the output listing the IP address and the subnet in
which it belongs.
.. btest:: data_type_subnets
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_type_subnets.bro
time
~~~~
While there is currently no supported way to define a time constant in
Bro, two built-in functions exist to make use of the ``time`` data
type. Both :bro:id:`network_time` and :bro:id:`current_time` return a
``time`` data type, but they each return a time based on different
criteria. The ``current_time`` function returns what is called the
wall-clock time as defined by the operating system. However,
``network_time`` returns the timestamp of the last packet processed,
be it from a live data stream or a saved packet capture. Both functions
return the time in epoch seconds, meaning ``strftime`` must be used to
turn the output into a human-readable form. The script below makes
use of the :bro:id:`connection_established` event handler to generate text
every time a SYN/ACK packet is seen responding to a SYN packet as part
of a TCP handshake. The text generated is in the format of a
timestamp and an indication of who the originator and responder were.
We use the ``strftime`` format string of ``%Y%M%d %H:%m:%S`` to
produce a common date-time formatted timestamp.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_time.bro
When the script is executed we get an output showing the details of
established connections.
.. btest:: data_type_time
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/wikipedia.trace ${DOC_ROOT}/scripting/data_type_time.bro
interval
~~~~~~~~
The interval data type is another area in Bro where rational
application of abstraction makes perfect sense. As a data type, the
interval represents a relative time as denoted by a numeric constant
followed by a unit of time. For example, 2.2 seconds would be
``2.2sec`` and thirty-one days would be represented by ``31days``.
Bro supports ``usec``, ``msec``, ``sec``, ``min``, ``hr``, or ``day``, which represent
microseconds, milliseconds, seconds, minutes, hours, and days
respectively. In fact, the interval data type allows for a surprising
amount of variation in its definitions. There can be a space between
the numeric constant and the time unit, or they can be crammed together
like a temporal portmanteau. The time unit can be either singular or
plural. All of this adds up to the fact that both ``42hrs`` and ``42 hr`` are
perfectly valid and logically equivalent in Bro. The point, however,
is to increase the readability and thus maintainability of a script.
Intervals can even be negated, allowing for ``- 10mins`` to represent
"ten minutes ago".
Intervals in Bro can have mathematical operations performed against
them allowing the user to perform addition, subtraction,
multiplication, division, and comparison operations. As well, Bro
returns an interval when subtracting one ``time`` value from another with
the ``-`` operator. The script below amends the script started in the section
above to include a time delta value printed along with the connection
establishment report.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_interval.bro
This time, when we execute the script we see an additional line in the
output to display the time delta since the last fully established
connection.
.. btest:: data_type_interval
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/wikipedia.trace ${DOC_ROOT}/scripting/data_type_interval.bro
Pattern
~~~~~~~
Bro has support for fast text searching operations using regular
expressions and even goes so far as to declare a native data type for
the patterns used in regular expressions. A pattern constant is
created by enclosing text within the forward slash characters. Bro
supports syntax very similar to the Flex lexical analyzer syntax. The
most common use of patterns in Bro you are likely to come across is
embedded matching using the ``in`` operator. Embedded matching
adheres to a strict format, requiring the regular expression or
pattern constant to be on the left side of the ``in`` operator and the
string against which it will be tested to be on the right.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_pattern_01.bro
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
``quick`` or the word ``fox``. The ``if`` statement on line six uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
``Split`` takes a string and a pattern as its arguments and returns a
table of strings indexed by a count. Each element of the table will
be the segments before and after any matches against the pattern but
excluding the actual matches. In this case, our pattern matches
twice, and results in a table with three entries. Lines 11 through 13
print the contents of the table in order.
.. btest:: data_type_pattern
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_type_pattern_01.bro
Patterns can also be used to compare strings through the equality and
inequality operators, ``==`` and ``!=``
respectively. When used in this manner, however, the string must match
entirely to resolve to true. For example, the script below uses two
ternary conditional statements to illustrate the use of the ``==``
operators with patterns. On lines 5 and 8 the output is altered based
on the result of the comparison between the pattern and the string.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_pattern_02.bro
.. btest:: data_type_pattern_02
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_type_pattern_02.bro
Record Data Type
----------------
With Bro's support for a wide array of data types and data structures,
an obvious extension is to include the ability to create custom
data types composed of atomic types and further data structures. To
accomplish this, Bro introduces the ``record`` type and the ``type``
keyword. Similar to how you would define a new data structure in C
with the ``typedef`` and ``struct`` keywords, Bro allows you to cobble
together new data types to suit the needs of your situation.
When combined with the ``type`` keyword, ``record`` can generate a
composite type. We have, in fact, already encountered a complex
example of the ``record`` data type in the earlier sections: the
:bro:type:`connection` record passed to many events. Another one,
:bro:type:`Conn::Info`, which corresponds to the fields logged into
``conn.log``, is shown in the excerpt below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro
:lines: 10-12,16,17,19,21,23,25,28,31,35,37,56,62,68,90,93,97,100,104,108,109,114
Looking at the structure of the definition, a new collection of data
types is being defined as a type called ``Info``. Since this type
definition is within the confines of an export block, what is defined
is, in fact, ``Conn::Info``.
The format for a declaration of a record type in Bro includes the
descriptive name of the type being defined and the separate fields
that make up the record. The individual fields that make up the new
record are not limited in type or number, as long as the name of each
field is unique.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_record_01.bro
.. btest:: data_struct_record_01
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_record_01.bro
The sample above shows a simple type definition that includes a
string, a set of ports, and a count to define a service type. Also
included is a function to print each field of a record in a formatted
fashion and a :bro:id:`bro_init` event handler to show some
functionality of working with records. The definitions of the DNS and
HTTP services are both done inline using square brackets before being
passed to the ``print_service`` function. The ``print_service``
function makes use of the ``$`` dereference operator to access the
fields within the newly defined Service record type.
As you saw in the definition for the ``Conn::Info`` record, other
records are even valid as fields within another record. We can extend
the example above to include another record that contains a Service
record.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_record_02.bro
.. btest:: data_struct_record_02
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_record_02.bro
The example above includes a second record type in which a field is
used as the data type for a set. Records can be repeatedly nested
within other records, their fields reachable through repeated chains
of the ``$`` dereference operator.
It's also common to see a ``type`` used simply to alias a data
structure to a more descriptive name. The lines below, taken from Bro's
own type definitions file, show an example of this.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/init-bare.bro
:lines: 12,19,26
The three lines above alias a type of data structure to a descriptive
name. Functionally, the operations are the same, however, each of the
types above are named such that their function is instantly
identifiable. This is another place in Bro scripting where
consideration can lead to better readability of your code and thus
easier maintainability in the future.
Custom Logging
==============
Armed with a decent understanding of the data types and data
structures in Bro, exploring the various frameworks available is a
much more rewarding effort. The framework with which most users are
likely to have the most interaction is the Logging Framework.
Designed so as to abstract much of the process of
creating a file and appending ordered and organized data into it, the
Logging Framework makes use of some potentially unfamiliar
nomenclature. Specifically, Log Streams, Filters and Writers are
simply abstractions of the processes required to manage a high rate of
incoming logs while maintaining full operability. If you've seen Bro
employed in an environment with a large number of connections, you
know that logs are produced incredibly quickly; the ability to process
a large set of data and write it to disk is due to the design of the
Logging Framework.
Data is written to a Log Stream based on decision-making processes in
Bro's scriptland. A Log Stream corresponds to a single log as defined
by the set of name/value pairs that make up its fields. That data can
then be filtered, modified, or redirected with Logging Filters which,
by default, are set to log everything. Filters can be used to break
log files into subsets or to duplicate that information to another
output. The final output of the data is defined by the writer. Bro's
default writer produces simple tab-separated ASCII files, but Bro also
includes support for DataSeries and Elasticsearch outputs, as well as
additional writers currently in development. While these new terms
and ideas may give the impression that the Logging Framework is
difficult to work with, the learning curve is, in actuality,
not very steep at all. The abstraction built into the Logging
Framework makes it such that the vast majority of scripts need not go
past the basics. In effect, writing to a log file is as simple as
defining the format of your data, letting Bro know that you wish to
create a new log, and then calling the :bro:id:`Log::write` method to
output log records.
The Logging Framework is an area in Bro where, the more you see it
used and the more you use it yourself, the more second nature the
boilerplate parts of the code will become. As such, let's work
through a contrived example of simply logging the digits 1 through 10
and their corresponding factorial to the default ASCII log writer.
It's always best to work through the problem once, simulating the
desired output with ``print`` and ``fmt`` before attempting to dive
into the Logging Framework. Below is a script that defines a
factorial function to recursively calculate the factorial of an
unsigned integer passed as an argument to the function. Using
``print`` and :bro:id:`fmt`, we can ensure that Bro performs these
calculations correctly, as well as get an idea of the answers ourselves.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
.. btest:: framework_logging_factorial
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
The output of the script aligns with what we expect so now it's time
to integrate the Logging Framework. As mentioned above we have to
perform a few steps before we can issue the :bro:id:`Log::write`
method and produce a logfile. As we are working within a namespace
and informing an outside entity of workings and data internal to the
namespace, we use an ``export`` block. First we need to inform Bro
that we are going to be adding another Log Stream by adding a value to
the :bro:type:`Log::ID` enumerable. In line 3 of the script, we append the
value ``LOG`` to the ``Log::ID`` enumerable; however, due to this being in
an export block, the value appended to ``Log::ID`` is actually
``Factor::LOG``. Next, we need to define the name and value pairs
that make up the data of our logs and dictate its format. Lines 5
through 9 define a new datatype called an ``Info`` record (actually,
``Factor::Info``) with two fields, both unsigned integers. Each of the
fields in the ``Factor::Info`` record type includes the ``&log``
attribute, indicating that these fields should be passed to the
Logging Framework when ``Log::write`` is called. Were there to be
any name value pairs without the ``&log`` attribute, those fields
would simply be ignored during logging but remain available for the
lifespan of the variable. The next step is to create the logging
stream with :bro:id:`Log::create_stream` which takes a Log::ID and a
record as its arguments. In this example, on line 28, we call the
``Log::create_stream`` method and pass ``Factor::LOG`` and the
``Factor::Info`` record as arguments. From here on out, if we issue
the ``Log::write`` command with the correct ``Log::ID`` and a properly
formatted ``Factor::Info`` record, a log entry will be generated.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_02.bro
Now, if we run the new version of the script, no logging information is
generated to stdout. Instead, the output is all in ``factor.log``,
properly formatted and organized.
.. btest:: framework_logging_factorial-2
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/framework_logging_factorial_02.bro
@TEST-EXEC: btest-rst-include factor.log
While the previous example is a simplistic one, it serves to
demonstrate the small pieces of script code that need to be in place in
order to generate logs. For example, it's common to call
``Log::create_stream`` in :bro:id:`bro_init`, and while in a live
example the decision of when to call ``Log::write`` would likely be
made in some other event handler, in this case we use :bro:id:`bro_done`.
If you've already spent time with a deployment of Bro, you've likely
had the opportunity to view, search through, or manipulate the logs
produced by the Logging Framework. The log output from a default
installation of Bro is substantial, to say the least; however, there
are times when the Logging Framework's default behavior isn't
ideal for the situation. This can range from needing to log more or
less data with each call to ``Log::write`` to the need to split
log files based on arbitrary logic. In the latter case, Filters come
into play along with the Logging Framework. Filters grant a level of
customization to Bro's scriptland, allowing the script writer to
include or exclude fields in the log and even make alterations to the
path of the file in which the logs are being placed. Each stream,
when created, is given a default filter called, not surprisingly,
``default``. When using the ``default`` filter, every key/value pair
with the ``&log`` attribute is written to a single file. For the
example we've been using, let's extend it so as to write any factorial
that is evenly divisible by 5 to an alternate file, while writing the
remaining logs to ``factor.log``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_03.bro
:lines: 38-62
:linenos:
To dynamically alter the file in which a stream writes its logs, a
filter can specify a function that returns a string to be used as the
filename for the current call to ``Log::write``. The definition for
this function has to take as its parameters a ``Log::ID`` called ``id``, a
string called ``path``, and the appropriate record type for the logs, called
``rec``. You can see that the definition of ``mod5`` used in this example on
line one conforms to that requirement. The function simply returns
``factor-mod5`` if the factorial is evenly divisible by 5; otherwise, it
returns ``factor-non5``. In the additional ``bro_init`` event
handler, we define a locally scoped ``Log::Filter`` and assign it a
record that defines the ``name`` and ``path_func`` fields. We then
call ``Log::add_filter`` to add the filter to the ``Factor::LOG``
``Log::ID`` and call ``Log::remove_filter`` to remove the ``default``
filter for ``Factor::LOG``. Had we not removed the ``default`` filter,
we'd have ended up with three log files: ``factor-mod5.log`` with all the
factorials that are evenly divisible by 5, ``factor-non5.log`` with the
factorials that are not, and ``factor.log``, which would have
included all factorials.
.. btest:: framework_logging_factorial-3
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/framework_logging_factorial_03.bro
@TEST-EXEC: btest-rst-include factor-mod5.log
The ability of Bro to generate easily customizable and extensible logs
which remain easily parsable is a big part of the reason Bro has
gained a large measure of respect. In fact, it's difficult at times
to think of something that Bro doesn't log and as such, it is often
advantageous for analysts and systems architects to instead hook into
the logging framework to be able to perform custom actions based upon
the data being sent to the Logging Framework. To that end, every default
log stream in Bro generates a custom event that can be handled by
anyone wishing to act upon the data being sent to the stream. By
convention these events are usually in the format ``log_x`` where x is
the name of the logging stream; as such the event raised for every log
sent to the Logging Framework by the HTTP parser would be
``log_http``. In fact, we've already seen a script handle the
``log_http`` event when we broke down how the ``detect-MHR.bro``
script worked. In that example, as each log entry was sent to the
logging framework, post-processing was taking place in the
``log_http`` event. Instead of using an external script to parse the
``http.log`` file and do post-processing for the entry,
post-processing can be done in real time in Bro.
Telling Bro to raise an event from your own logging stream is as simple
as exporting that event name and then adding that event in the call to
``Log::create_stream``. Going back to our simple example of logging
the factorial of an integer, we add ``log_factor`` to the ``export``
block and define the value to be passed to it, in this case the
``Factor::Info`` record. We then list the ``log_factor`` event as
the ``$ev`` field in the call to ``Log::create_stream``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_04.bro
Raising Notices
===============
While Bro's Logging Framework provides an easy and systematic way to
generate logs, there still exists a need to indicate when a specific
behavior has been detected and a method to allow that detection to
come to someone's attention. To that end, the Notice Framework is in
place to allow script writers a codified means through which they can
raise a notice, as well as a system through which an operator can
opt in to receive the notice. Bro holds to the philosophy that it is
up to the individual operator to indicate the behaviors in which they
are interested; as such, Bro ships with a large number of policy
scripts which detect behavior that may be of interest, but it does not
presume to guess which behaviors are "actionable". In effect,
Bro works to separate the act of detection and the responsibility of
reporting. With the Notice Framework it's simple to raise a notice
for any behavior that is detected.
To raise a notice in Bro, you only need to indicate to Bro that you
are providing a specific :bro:type:`Notice::Type` by exporting it and then
make a call to :bro:id:`NOTICE`, supplying it with an appropriate
:bro:type:`Notice::Info` record. Oftentimes the call to ``NOTICE``
includes just the ``Notice::Type`` and a concise message. There are,
however, significantly more options available when raising notices, as
seen in the table below. The only field in the table below whose
attributes make it a required field is the ``note`` field. Still,
good manners are always important, and including a concise message in
``$msg`` and, where necessary, the contents of the connection record
in ``$conn``, along with the ``Notice::Type``, tends to comprise the
minimum of information required for a notice to be considered useful.
If the ``$conn`` variable is supplied, the Notice Framework will
auto-populate the ``$id`` and ``$src`` fields as well. Other fields
that are commonly included, ``$identifier`` and ``$suppress_for``, are
built around the automated suppression feature of the Notice Framework,
which we will cover shortly.
.. todo::
Once the link to ``Notice::Info`` work I think we should take out
the table. That's too easy to get out of date.
+---------------------+---------------------+----------------+----------------------------------------+
| Field               | Type                | Attributes     | Use                                    |
+=====================+=====================+================+========================================+
| ts                  | time                | &log &optional | The time of the notice                 |
+---------------------+---------------------+----------------+----------------------------------------+
| uid                 | string              | &log &optional | A unique connection ID                 |
+---------------------+---------------------+----------------+----------------------------------------+
| id                  | conn_id             | &log &optional | A 4-tuple to identify endpoints        |
+---------------------+---------------------+----------------+----------------------------------------+
| conn                | connection          | &optional      | Shorthand for the uid and id           |
+---------------------+---------------------+----------------+----------------------------------------+
| iconn               | icmp_conn           | &optional      | Shorthand for the uid and id           |
+---------------------+---------------------+----------------+----------------------------------------+
| proto               | transport_proto     | &log &optional | Transport protocol                     |
+---------------------+---------------------+----------------+----------------------------------------+
| note                | Notice::Type        | &log           | The Notice::Type of the notice         |
+---------------------+---------------------+----------------+----------------------------------------+
| msg                 | string              | &log &optional | Human readable message                 |
+---------------------+---------------------+----------------+----------------------------------------+
| sub                 | string              | &log &optional | Human readable message                 |
+---------------------+---------------------+----------------+----------------------------------------+
| src                 | addr                | &log &optional | Source address if no conn_id           |
+---------------------+---------------------+----------------+----------------------------------------+
| dst                 | addr                | &log &optional | Destination addr if no conn_id         |
+---------------------+---------------------+----------------+----------------------------------------+
| p                   | port                | &log &optional | Port if no conn_id                     |
+---------------------+---------------------+----------------+----------------------------------------+
| n                   | count               | &log &optional | Count or status code                   |
+---------------------+---------------------+----------------+----------------------------------------+
| src_peer            | event_peer          | &log &optional | Peer that raised the notice            |
+---------------------+---------------------+----------------+----------------------------------------+
| peer_descr          | string              | &log &optional | Text description of the src_peer       |
+---------------------+---------------------+----------------+----------------------------------------+
| actions             | set[Notice::Action] | &log &optional | Actions applied to the notice          |
+---------------------+---------------------+----------------+----------------------------------------+
| policy_items        | set[count]          | &log &optional | Policy items that have been applied    |
+---------------------+---------------------+----------------+----------------------------------------+
| email_body_sections | vector              | &optional      | Body of the email for email notices.   |
+---------------------+---------------------+----------------+----------------------------------------+
| email_delay_tokens  | set[string]         | &optional      | Delay functionality for email notices. |
+---------------------+---------------------+----------------+----------------------------------------+
| identifier          | string              | &optional      | A unique string identifier             |
+---------------------+---------------------+----------------+----------------------------------------+
| suppress_for        | interval            | &log &optional | Length of time to suppress a notice.   |
+---------------------+---------------------+----------------+----------------------------------------+
One of the default policy scripts raises a notice when an SSH login
has been heuristically detected and the originating hostname is one
that would raise suspicion. Effectively, the script attempts to
define a list of hosts from which you would never want to see SSH
traffic originating, like DNS servers, mail servers, etc. To
accomplish this, the script adheres to the separation of detection
and reporting by detecting a behavior and raising a notice. Whether
or not that notice is acted upon is decided by the local Notice
Policy, but the script attempts to supply as much information as
possible while staying concise.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssh/interesting-hostnames.bro
:lines: 1-46
While much of the script relates to the actual detection, the parts
specific to the Notice Framework are actually quite interesting in
themselves. On line 12 the script's ``export`` block adds the value
``SSH::Interesting_Hostname_Login`` to the enumerable constant
``Notice::Type`` to indicate to the Bro core that a new type of notice
is being defined. The script then calls ``NOTICE`` and defines the
``$note``, ``$msg``, ``$sub`` and ``$conn`` fields of the
:bro:type:`Notice::Info` record. Line 39 also includes a ternary
conditional that modifies the ``$msg`` text depending on whether the
host is a local address and whether it is the client or the server.
This use of :bro:id:`fmt` and the ternary operator is a concise way to
lend readability to the notices that are generated without the need
for branching ``if`` statements that each raise a specific notice.
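
As a rough sketch of that pattern, the conditional operator can pick
the wording inside a single :bro:id:`fmt` call. The address and message
text below are made up purely for illustration:

.. code:: bro

    @load base/utils/site

    event bro_init()
        {
        # Hypothetical address, used only to demonstrate the fmt/ternary pattern.
        local host: addr = 192.168.1.10;
        print fmt("Possible SSH login involving a %s host.",
                  Site::is_local_addr(host) ? "local" : "remote");
        }
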
The opt-in system for notices is managed through writing
:bro:id:`Notice::policy` hooks. A ``Notice::policy`` hook takes as
its argument a ``Notice::Info`` record which will hold the same
information your script provided in its call to ``NOTICE``. With
access to the ``Notice::Info`` record for a specific notice, you can
include logic, such as ``if`` statements, in the body of your hook to
alter the policy for handling notices on your system. In Bro, hooks are
akin to a mix of functions and event handlers: like functions, calls
to them are synchronous (i.e., run to completion and return); but like
events, they can have multiple bodies which will all execute. For
defining a notice policy, you define a hook and Bro will take care of
passing in the ``Notice::Info`` record. The simplest kind of
``Notice::policy`` hook simply checks the value of ``$note`` in the
``Notice::Info`` record being passed into the hook and performs an
action based on the answer. The hook below adds the
:bro:enum:`Notice::ACTION_EMAIL` action for the
``SSH::Interesting_Hostname_Login`` notice raised in the
:doc:`/scripts/policy/protocols/ssh/interesting-hostnames` script.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_hook_01.bro
In the example above we've added ``Notice::ACTION_EMAIL`` to the
``n$actions`` set. This set, defined in the Notice Framework scripts,
can only have entries from the :bro:type:`Notice::Action` type, which is
itself an enumerable that defines the values shown in the table below
along with their corresponding meanings. The
:bro:enum:`Notice::ACTION_LOG` action writes the notice to the
``Notice::LOG`` logging stream which, in the default configuration,
will write each notice to the ``notice.log`` file and take no further
action. The :bro:enum:`Notice::ACTION_EMAIL` action will send an email
to the address or addresses defined in the :bro:id:`Notice::mail_dest`
variable with the particulars of the notice as the body of the email.
The last action, :bro:enum:`Notice::ACTION_ALARM`, sends the notice to
the :bro:enum:`Notice::ALARM_LOG` logging stream which is then rotated
hourly and its contents emailed in readable ASCII to the addresses in
``Notice::mail_dest``.
+--------------+-----------------------------------------------------+
| ACTION_NONE | Take no action |
+--------------+-----------------------------------------------------+
| ACTION_LOG | Send the notice to the Notice::LOG logging stream. |
+--------------+-----------------------------------------------------+
| ACTION_EMAIL | Send an email with the notice in the body. |
+--------------+-----------------------------------------------------+
| ACTION_ALARM | Send the notice to the Notice::ALARM_LOG stream.    |
+--------------+-----------------------------------------------------+
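
For the email-related actions to do anything useful, the
:bro:id:`Notice::mail_dest` variable must point somewhere. A minimal
redef in ``local.bro`` might look like the following; the address shown
is only a placeholder:

.. code:: bro

    # Placeholder address; replace with your own alerting inbox.
    redef Notice::mail_dest = "security-alerts@example.com";
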
While actions like ``Notice::ACTION_EMAIL`` have appeal for quick
alerts and response, a caveat to their use is making sure that the
notices configured with this action also have a suppression. A
suppression is a means through which notices can be ignored after they
are initially raised, provided the author of the script has set an
identifier. An identifier is a unique string of information collected
from the connection, relative to the behavior that has been observed
by Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 59-62
In the :doc:`/scripts/policy/protocols/ssl/expiring-certs` script,
which identifies when SSL certificates are set to expire and raises
notices as the expiration date crosses pre-defined thresholds, the
call to ``NOTICE`` above also sets the ``$identifier`` entry by
concatenating the responder IP, port, and the hash of the certificate.
The combination of responder IP, port, and certificate hash makes an
appropriate identifier, as it uniquely describes the certificate
against which the suppression can be matched. Were we to take out any of the
entities used for the identifier, for example the certificate hash, we
could be setting our suppression too broadly, causing an analyst to
miss a notice that should have been raised. Depending on the
available data for the identifier, it can be useful to set the
``$suppress_for`` variable as well. The ``expiring-certs.bro`` script
sets ``$suppress_for`` to ``1day``, telling the Notice Framework to
suppress the notice for 24 hours after the first notice is raised.
Once that time limit has passed, another notice can be raised which
will again set the ``1day`` suppression time. Suppressing for a
specific amount of time has benefits beyond simply not filling up an
analyst's email inbox; keeping the notice alerts timely and succinct
helps avoid a case where an analyst might see the notice and, due to
overexposure, ignore it.
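
To sketch how this looks outside of ``expiring-certs.bro``, an
identifier can be built with :bro:id:`cat` and paired with an explicit
``$suppress_for``. The notice type and event handler below re-use the
hypothetical ``Test::Example_Notice`` from the earlier sketch:

.. code:: bro

    event connection_established(c: connection)
        {
        # Concatenate the responder address and port so the suppression
        # matches only repeat notices about this particular service.
        NOTICE([$note=Test::Example_Notice,
                $msg="Illustration of identifier-based suppression",
                $conn=c,
                $identifier=cat(c$id$resp_h, c$id$resp_p),
                $suppress_for=30 min]);
        }
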
The ``$suppress_for`` variable can also be altered in a
``Notice::policy`` hook, allowing a deployment to better suit the
environment in which it is run. Using the example of
``expiring-certs.bro``, we can write a ``Notice::policy`` hook for
``SSL::Certificate_Expires_Soon`` to configure the ``$suppress_for``
variable to a shorter time.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_hook_suppression_01.bro
While ``Notice::policy`` hooks allow you to build custom
predicate-based policies for a deployment, there are bound to be times
where you don't require the full expressiveness that a hook allows.
In short, there will be notice policy considerations where a broad
decision can be made based on the ``Notice::Type`` alone. To
facilitate these types of decisions, the Notice Framework supports
Notice Policy shortcuts. These shortcuts are implemented through the
means of a group of data structures that map specific, pre-defined
details and actions to the effective name of a notice. Primarily
implemented as sets or tables keyed by :bro:type:`Notice::Type`
enumerables, Notice Policy shortcuts can be placed as single directives
in your ``local.bro`` file as a concise, readable configuration. As
these variables are all constants, it bears mentioning that they are
set at parse time, before Bro is fully up and running, and are not
altered dynamically.
+------------------------------------+-----------------------------------------------------+-------------------------------------+
| Name | Description | Data Type |
+====================================+=====================================================+=====================================+
| Notice::ignored_types | Ignore the Notice::Type entirely | set[Notice::Type] |
+------------------------------------+-----------------------------------------------------+-------------------------------------+
| Notice::emailed_types | Set Notice::ACTION_EMAIL to this Notice::Type | set[Notice::Type] |
+------------------------------------+-----------------------------------------------------+-------------------------------------+
| Notice::alarmed_types | Set Notice::ACTION_ALARM to this Notice::Type | set[Notice::Type] |
+------------------------------------+-----------------------------------------------------+-------------------------------------+
| Notice::not_suppressed_types | Remove suppression from this Notice::Type | set[Notice::Type] |
+------------------------------------+-----------------------------------------------------+-------------------------------------+
| Notice::type_suppression_intervals | Alter the $suppress_for value for this Notice::Type | table[Notice::Type] of interval |
+------------------------------------+-----------------------------------------------------+-------------------------------------+
The table above details the five Notice Policy shortcuts, their
meanings, and the data types used to implement them. With the exception
of ``Notice::type_suppression_intervals``, a ``set`` data type is
employed to hold the ``Notice::Type`` of the notice upon which a
shortcut should be applied. The first three shortcuts are fairly
self-explanatory, applying an action to the ``Notice::Type`` elements
in the set, while the latter two shortcuts alter details of the
suppression being applied to the notice. The shortcut
``Notice::not_suppressed_types`` can be used to remove the configured
suppression from a notice, while ``Notice::type_suppression_intervals``
can be used to alter the suppression interval defined by
``$suppress_for`` in the call to ``NOTICE``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_01.bro
The Notice Policy shortcut above adds the ``Notice::Type`` values
``SSH::Interesting_Hostname_Login`` and ``SSH::Login`` to the
``Notice::emailed_types`` set, while the shortcut below alters the
length of time for which those notices will be suppressed.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_02.bro
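
The other shortcuts follow the same pattern. For instance, to discard a
notice type entirely, a single directive in ``local.bro`` along these
lines would do; the example below speculatively re-uses the SSH notice
type discussed above:

.. code:: bro

    # Drop SSH::Interesting_Hostname_Login notices entirely.
    redef Notice::ignored_types += { SSH::Interesting_Hostname_Login };
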