Pass over the Using Bro section.

I edited the text a little bit, reorganized the structure somewhat,
and extended some parts. I've also simplified the tests a bit, using
some of the BTest tweaks committed in parallel.
This commit is contained in:
Robin Sommer 2013-08-22 15:55:44 -07:00
parent 1e9227a9e9
commit 399899c49b
23 changed files with 296 additions and 120 deletions

@ -1 +1 @@
-Subproject commit 69606f8f3cc84d694ca1da14868a5fecd4abbc96
+Subproject commit 063b05562172c9cae160cf8df899afe366d1b108


@ -1,4 +1,6 @@
.. _framework-logging:

=================
Logging Framework
=================


@ -5,97 +5,155 @@

Using Bro
=========

.. contents::

Once Bro has been deployed in an environment and monitoring live
traffic, it will, in its default configuration, begin to produce
human-readable ASCII logs. Each log file, produced by Bro's
:ref:`framework-logging`, is populated with organized, mostly
connection-oriented data. As the standard log files are simple ASCII
data, working with the data contained in them can be done from a
command line terminal once you have been familiarized with the types
of data that can be found in each file. In the following, we work
through the logs' general structure and then examine some standard
ways of working with them.

----------------------
Working with Log Files
----------------------

Generally, all of Bro's log files are produced by a corresponding
script that defines their individual structure. However, as each log
file flows through the Logging Framework, they share a set of
structural similarities. Without breaking into the scripting aspect of
Bro here, a bird's eye view of how the log files are produced would
progress as follows. The script's author defines the kinds of data,
such as the originating IP address or the duration of a connection,
which will make up the fields (i.e., columns) of the log file. The
author then decides what network activity should generate a single log
file entry (i.e., one line); that could, e.g., be a connection having
been completed or an HTTP ``GET`` method being issued by an
originator. When these behaviors are observed during operation, the
data is passed to the Logging Framework which, in turn, adds the entry
to the appropriate log file.

As the fields of the log entries can be further customized by the
user, the Logging Framework makes use of a header block to ensure that
each file remains self-describing. This header can be seen by running
the Unix utility ``head`` and outputting the first lines of the file:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd "bro -r $TRACES/wikipedia.trace && head -15 conn.log"

As you can see, the header consists of lines prefixed by ``#`` and
includes information such as what separators are being used for
various types of data, what an empty field looks like and what an
unset field looks like. In this example, the default TAB separator is
being used as the delimiter between fields (``\x09`` is the tab
character in hex). It also lists the comma as the separator for set
data, the string ``(empty)`` as the indicator for an empty field and
the ``-`` character as the indicator for a field that hasn't been set.
The timestamp for when the file was created is included under
``#open``. The header then goes on to detail the fields being listed
in the file and the data types of those fields in ``#fields`` and
``#types``, respectively. These two entries are often the two most
significant points of interest as they detail not only the field names
but the data types used. When navigating through the different log
files with tools like ``sed``, ``awk``, or ``grep``, having the field
definitions readily available saves the user some mental leg work. The
field names are also a key resource for using the :ref:`bro-cut
<bro-cut>` utility included with Bro, see below.
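Because the ``#fields`` line enumerates every column, a shell one-liner can turn it into a quick field-to-column reference. The following is a sketch using a tiny, made-up header (a real ``conn.log`` header lists many more fields):

```shell
# Build a minimal, made-up Bro-style header (tab-separated) purely for
# illustration; a real conn.log header lists many more fields.
printf '#separator \\x09\n' > header.sample
printf '#fields\tts\tuid\tid.orig_h\tid.orig_p\n' >> header.sample

# Print "column-number field-name" pairs derived from the #fields line;
# field 1 is the literal "#fields" token, so data columns start at 2.
awk -F'\t' '/^#fields/ { for (i = 2; i <= NF; i++) print i - 1, $i }' header.sample
```

Running this against a real ``conn.log`` instead of ``header.sample`` yields a ready-made legend for the positional field references used with ``awk`` below.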
Next to the header follows the main content; in this example we see 7
connections with their key properties, such as originator and
responder IP addresses (note how Bro transparently handles both IPv4
and IPv6), transport-layer ports, application-layer services - the
``service`` field is filled in as Bro determines a specific protocol
to be in use, independent of the connection's ports - payload sizes,
and more. See :bro:type:`Conn::Info` for a description of all fields.

In addition to ``conn.log``, Bro generates many further logs by
default, including:

``dpd.log``
    A summary of protocols encountered on non-standard ports.

``dns.log``
    All DNS activity.

``ftp.log``
    A log of FTP session-level activity.

``files.log``
    Summaries of files transferred over the network. This information
    is aggregated from different protocols, including HTTP, FTP, and
    SMTP.

``http.log``
    A summary of all HTTP requests with their replies.

``known_certs.log``
    SSL certificates seen in use.

``smtp.log``
    A summary of SMTP activity.

``ssl.log``
    A record of SSL sessions, including certificates being used.

``weird.log``
    A log of unexpected protocol-level activity. Whenever Bro's
    protocol analysis encounters a situation it would not expect
    (e.g., an RFC violation), it logs it in this file. Note that in
    practice, real-world networks tend to exhibit a large number of
    such "crud" that is usually not worth following up on.

As you can see, some log files are specific to a particular protocol,
while others aggregate information across different types of activity.

.. _bro-cut:

Using ``bro-cut``
-----------------
The ``bro-cut`` utility can be used in place of other tools to build
terminal commands that remain flexible and accurate independent of
possible changes to the log file itself. It accomplishes this by
parsing the header in each file and allowing the user to refer to the
specific columnar data available (in contrast to tools like ``awk``
that require the user to refer to fields by their position). For
example, the following command extracts just the given columns from a
``conn.log``:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd -n 10 "cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration"

The corresponding ``awk`` command would look like this:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd -n 10 awk \'/^[^#]/ {print \$3, \$4, \$5, \$6, \$9}\' conn.log
While the output is similar, the advantage of using ``bro-cut`` over
``awk`` lies in that, while ``awk`` is flexible and powerful,
``bro-cut`` was specifically designed to work with Bro's log files.
Firstly, the ``bro-cut`` output includes only the log file entries,
while the ``awk`` solution needs to skip the header manually.
Secondly, since ``bro-cut`` uses the field descriptors to identify and
extract data, it allows for flexibility independent of the format and
contents of the log file. It's not uncommon for a Bro configuration to
add extra fields to various log files as required by the environment.
In this case, the fields in the ``awk`` command would have to be
altered to compensate for the new positions whereas the ``bro-cut``
output would not change.

.. note::

    The sequence of field names given to ``bro-cut`` determines the
    output order, which means you can also use ``bro-cut`` to reorder
    fields. That can be helpful when piping into, e.g., ``sort``.
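To make the contrast concrete, here is a rough sketch of how name-based selection could be emulated in plain ``awk`` by reading the ``#fields`` line first; the miniature log and the ``cols`` variable are made up for illustration, and this is not how ``bro-cut`` itself is implemented:

```shell
# Two-line, made-up log: a "#fields" header plus one data record.
printf '#fields\tts\tuid\tid.resp_h\n' > log.sample
printf '1.0\tCABC\t10.0.0.1\n' >> log.sample

# Select columns by NAME: first map each field name to its position
# from the header, then print the requested columns for data lines.
awk -F'\t' -v cols='uid,id.resp_h' '
    /^#fields/ { for (i = 2; i <= NF; i++) idx[$i] = i - 1; next }
    /^#/       { next }                     # skip any other header lines
    {
        n = split(cols, want, ",")
        for (j = 1; j <= n; j++)
            printf "%s%s", $(idx[want[j]]), (j < n ? "\t" : "\n")
    }' log.sample
```

This prints ``CABC`` and ``10.0.0.1`` no matter where those columns sit in the file, which is exactly the robustness ``bro-cut`` provides against reordered or extended logs.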
As you may have noticed, the command for ``bro-cut`` uses output
redirection through the ``cat`` command and the ``|`` operator. Whereas
@ -104,51 +162,60 @@ line option, bro-cut only takes input through redirection such as
``|`` and ``<``. There are a couple of ways to direct log file data
into ``bro-cut``, each dependent upon the type of log file you're
processing. A caveat of its use, however, is that the 8 lines of
header data must be present.

.. note::

    ``bro-cut`` provides an option ``-c`` to include a corresponding
    format header in the output, which allows chaining multiple
    ``bro-cut`` instances or performing further post-processing that
    evaluates the header information.

In its default setup, Bro will rotate log files on an hourly basis,
moving the current log file into a directory with format
``YYYY-MM-DD`` and gzip compressing the file with a file name that
includes the log file type and time range of the file. In the case of
processing a compressed log file, you simply adjust your command line
tools to use the complementary ``z*`` versions of commands such as
``cat`` (``zcat``), ``grep`` (``zgrep``), and ``head`` (``zhead``).
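As a sketch of what that looks like in practice (the rotated file name below is made up; the exact naming depends on your rotation settings):

```shell
# Create a small, made-up log and gzip it, as Bro's hourly rotation
# would (the file name here is illustrative only):
printf '#fields\tts\tuid\n1.0\tCABC\n' > conn.rotated.log
gzip -f conn.rotated.log

# The z* variants operate on the compressed file directly:
zcat conn.rotated.log.gz | head -1        # peek at the header
zgrep 'CABC' conn.rotated.log.gz          # search without decompressing
```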
Working with Timestamps
-----------------------

``bro-cut`` accepts the flag ``-d`` to convert the epoch time values
in the log files to human-readable format. The following command
includes the human-readable time stamp, the unique identifier, and the
HTTP ``Host`` and ``URI`` as extracted from the ``http.log`` file:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -d ts uid host uri < http.log"

Oftentimes, log files from multiple sources are stored in UTC time to
allow easy correlation. Converting the timestamp from a log file to
UTC can be accomplished with the ``-u`` option:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -u ts uid host uri < http.log"

The default time format when using ``-d`` or ``-u`` is the
``strftime`` format string ``%Y-%m-%dT%H:%M:%S%z``, which results in a
string with year, month, and day of month, followed by hour, minutes,
seconds, and the timezone offset. The default format can be altered by
using the ``-D`` and ``-U`` flags with standard ``strftime`` syntax.
For example, to put the day of the month first, you could use a format
string of ``%d-%m-%YT%H:%M:%S%z``:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -D %d-%m-%YT%H:%M:%S%z ts uid host uri < http.log"
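If ``bro-cut`` is not at hand, the same conversion can be approximated with standard tools. As a sketch, GNU ``date`` can render one of the epoch timestamps from the ``conn.log`` excerpt above (the ``-d @epoch`` form is a GNU extension):

```shell
# Render an epoch timestamp from the conn.log excerpt in the same
# default format that "bro-cut -u" uses (GNU date syntax):
date -u -d @1300475168 +%Y-%m-%dT%H:%M:%S%z
```

which prints ``2011-03-18T19:06:08+0000``, matching the timestamps shown in the ``http.log`` examples.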
See ``man strftime`` for more options for the format string.

Using UIDs
----------

While Bro can do signature-based analysis, its primary focus is on
behavioral detection, which alters the practice of log review from
@ -156,8 +223,8 @@ behavioral detection which alters the practice of log review from
trip. A common progression of review includes correlating a session
across multiple log files. As a connection is processed by Bro, a
unique identifier is assigned to each session. This unique identifier
is generally included in any log file entry associated with that
connection and can be used to cross-reference different log files.

A simple example would be to cross-reference a UID seen in a
``conn.log`` file. Here, we're looking for the connection with the
@ -165,19 +232,21 @@ largest number of bytes from the responder by redirecting the output
of ``cat conn.log`` into ``bro-cut`` to extract the UID and the
``resp_bytes`` field, then sorting that output by ``resp_bytes``:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd "cat conn.log | bro-cut uid resp_bytes | sort -nrk2 | head -5"

Taking the UID of the first of the top responses, we can now
cross-reference that with the UIDs in the ``http.log`` file:

.. btest:: using_bro

   @TEST-EXEC: btest-rst-cmd "cat http.log | bro-cut uid id.resp_h method status_code host uri | grep VW0XPVINV8a"

As you can see, there are two HTTP ``GET`` requests within the
session that Bro identified and logged. Given that HTTP is a stream
protocol, it can have multiple ``GET``/``POST``/etc. requests in a
stream, and Bro is able to extract and track that information for you,
giving you an in-depth and structured view into HTTP traffic on your
network.
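The same cross-referencing pattern works with generic shell tools as well. As a self-contained sketch with two made-up miniature logs (real Bro UIDs are longer strings):

```shell
# Two made-up miniature logs that share a UID column (tab-separated):
printf 'C1\t734\nC2\t100\n' > conn.mini                      # uid, resp_bytes
printf 'C1\tGET\t/logo.png\nC2\tGET\t/b.css\n' > http.mini   # uid, method, uri

# Step 1: UID of the connection with the most responder bytes:
top=$(sort -nrk2 conn.mini | head -1 | awk '{print $1}')

# Step 2: pull that session's entries from the other log:
grep "^$top" http.mini
```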


@ -0,0 +1,19 @@
.. code-block:: none
# bro -r wikipedia.trace && head -15 conn.log
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2013-08-22-22-52-46
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1300475167.096535 UWkUyAuUGXf 141.142.220.202 5353 224.0.0.251 5353 udp dns - - - S0 - 0 D 1 73 0 0 (empty)
1300475167.097012 arKYeMETxOg fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp - - - - S0 - 0 D 1 199 0 0 (empty)
1300475167.099816 k6kgXLOoSKl 141.142.220.50 5353 224.0.0.251 5353 udp - - - - S0 - 0 D 1 179 0 0 (empty)
1300475168.853899 TEfuqmmG4bh 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF - 0 Dd 1 66 1 117 (empty)
1300475168.854378 FrJExwHcSal 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF - 0 Dd 1 80 1 127 (empty)
1300475168.854837 5OKnoww6xl4 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF - 0 Dd 1 66 1 211 (empty)
1300475168.857956 fRFu0wcOle6 141.142.220.118 32902 141.142.2.2 53 udp dns 0.000317 38 89 SF - 0 Dd 1 66 1 117 (empty)


@ -0,0 +1,15 @@
.. code-block:: none
# cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration
141.142.220.202 5353 224.0.0.251 -
fe80::217:f2ff:fed7:cf65 5353 ff02::fb -
141.142.220.50 5353 224.0.0.251 -
141.142.220.118 43927 141.142.2.2 0.000435
141.142.220.118 37676 141.142.2.2 0.000420
141.142.220.118 40526 141.142.2.2 0.000392
141.142.220.118 32902 141.142.2.2 0.000317
141.142.220.118 59816 141.142.2.2 0.000343
141.142.220.118 59714 141.142.2.2 0.000375
141.142.220.118 58206 141.142.2.2 0.000339
[...]


@ -0,0 +1,15 @@
.. code-block:: none
# awk '/^[^#]/ {print $3, $4, $5, $6, $9}' conn.log
141.142.220.202 5353 224.0.0.251 5353 -
fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 -
141.142.220.50 5353 224.0.0.251 5353 -
141.142.220.118 43927 141.142.2.2 53 0.000435
141.142.220.118 37676 141.142.2.2 53 0.000420
141.142.220.118 40526 141.142.2.2 53 0.000392
141.142.220.118 32902 141.142.2.2 53 0.000317
141.142.220.118 59816 141.142.2.2 53 0.000343
141.142.220.118 59714 141.142.2.2 53 0.000375
141.142.220.118 58206 141.142.2.2 53 0.000339
[...]


@ -0,0 +1,10 @@
.. code-block:: none
# bro-cut -d ts uid host uri < http.log
2011-03-18T19:06:08+0000 j4u32Pc5bif bits.wikimedia.org /skins-1.5/monobook/main.css
2011-03-18T19:06:08+0000 VW0XPVINV8a upload.wikimedia.org /wikipedia/commons/6/63/Wikipedia-logo.png
2011-03-18T19:06:08+0000 3PKsZ2Uye21 upload.wikimedia.org /wikipedia/commons/thumb/b/bb/Wikipedia_wordmark.svg/174px-Wikipedia_wordmark.svg.png
2011-03-18T19:06:08+0000 GSxOnSLghOa upload.wikimedia.org /wikipedia/commons/b/bd/Bookshelf-40x201_6.png
2011-03-18T19:06:08+0000 Tw8jXtpTGu6 upload.wikimedia.org /wikipedia/commons/thumb/8/8a/Wikinews-logo.png/35px-Wikinews-logo.png
[...]


@ -0,0 +1,10 @@
.. code-block:: none
# bro-cut -u ts uid host uri < http.log
2011-03-18T19:06:08+0000 j4u32Pc5bif bits.wikimedia.org /skins-1.5/monobook/main.css
2011-03-18T19:06:08+0000 VW0XPVINV8a upload.wikimedia.org /wikipedia/commons/6/63/Wikipedia-logo.png
2011-03-18T19:06:08+0000 3PKsZ2Uye21 upload.wikimedia.org /wikipedia/commons/thumb/b/bb/Wikipedia_wordmark.svg/174px-Wikipedia_wordmark.svg.png
2011-03-18T19:06:08+0000 GSxOnSLghOa upload.wikimedia.org /wikipedia/commons/b/bd/Bookshelf-40x201_6.png
2011-03-18T19:06:08+0000 Tw8jXtpTGu6 upload.wikimedia.org /wikipedia/commons/thumb/8/8a/Wikinews-logo.png/35px-Wikinews-logo.png
[...]


@ -0,0 +1,10 @@
.. code-block:: none
# bro-cut -D %d-%m-%YT%H:%M:%S%z ts uid host uri < http.log
18-03-2011T19:06:08+0000 j4u32Pc5bif bits.wikimedia.org /skins-1.5/monobook/main.css
18-03-2011T19:06:08+0000 VW0XPVINV8a upload.wikimedia.org /wikipedia/commons/6/63/Wikipedia-logo.png
18-03-2011T19:06:08+0000 3PKsZ2Uye21 upload.wikimedia.org /wikipedia/commons/thumb/b/bb/Wikipedia_wordmark.svg/174px-Wikipedia_wordmark.svg.png
18-03-2011T19:06:08+0000 GSxOnSLghOa upload.wikimedia.org /wikipedia/commons/b/bd/Bookshelf-40x201_6.png
18-03-2011T19:06:08+0000 Tw8jXtpTGu6 upload.wikimedia.org /wikipedia/commons/thumb/8/8a/Wikinews-logo.png/35px-Wikinews-logo.png
[...]


@ -0,0 +1,9 @@
.. code-block:: none
# cat conn.log | bro-cut uid resp_bytes | sort -nrk2 | head -5
VW0XPVINV8a 734
Tw8jXtpTGu6 734
GSxOnSLghOa 734
0Q4FH8sESw5 734
P654jzLoe3a 733


@ -0,0 +1,6 @@
.. code-block:: none
# cat http.log | bro-cut uid id.resp_h method status_code host uri | grep VW0XPVINV8a
VW0XPVINV8a 208.80.152.3 GET 304 upload.wikimedia.org /wikipedia/commons/6/63/Wikipedia-logo.png
VW0XPVINV8a 208.80.152.3 GET 304 upload.wikimedia.org /wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/35px-Wikibooks-logo.svg.png


@ -4,7 +4,7 @@ TmpDir = %(testbase)s/.tmp
BaselineDir = %(testbase)s/Baseline
IgnoreDirs = .svn CVS .tmp
IgnoreFiles = *.tmp *.swp #* *.trace .DS_Store
PartFinalizer = btest-diff-rst

[environment]
BROPATH=`bash -c %(testbase)s/../../build/bro-path-dev`

@ -18,6 +18,7 @@ TRACES=%(testbase)s/Traces
SCRIPTS=%(testbase)s/../scripts
DIST=%(testbase)s/../..
BUILD=%(testbase)s/../../build
TEST_DIFF_CANONIFIER=%(testbase)s/../scripts/diff-canonifier
TMPDIR=%(testbase)s/.tmp
BRO_PROFILER_FILE=%(testbase)s/.tmp/script-coverage.XXXXXX
BTEST_RST_FILTER=$SCRIPTS/rst-filter


@ -1,3 +0,0 @@
@TEST-COPY-FILE: ${TRACES}/wikipedia.trace
@TEST-EXEC: btest-rst-cmd bro -r wikipedia.trace
@TEST-EXEC: btest-rst-cmd "cat http.log | bro-cut ts id.orig_h | head -5"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd "bro -r $TRACES/wikipedia.trace && head -15 conn.log"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd -n 10 "cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd -n 10 awk \'/^[^#]/ {print \$3, \$4, \$5, \$6, \$9}\' conn.log


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -d ts uid host uri < http.log"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -u ts uid host uri < http.log"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd -n 5 "bro-cut -D %d-%m-%YT%H:%M:%S%z ts uid host uri < http.log"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd "cat conn.log | bro-cut uid resp_bytes | sort -nrk2 | head -5"


@ -0,0 +1 @@
@TEST-EXEC: btest-rst-cmd "cat http.log | bro-cut uid id.resp_h method status_code host uri | grep VW0XPVINV8a"


@ -11,4 +11,4 @@ fi
# The first sed uses a "basic" regexp, the 2nd a "modern" one.
sed 's/[0-9]\{10\}\.[0-9]\{2,8\}/XXXXXXXXXX.XXXXXX/g' | \
$sed 's/^ *#(open|close).(19|20)..-..-..-..-..-..$/#\1 XXXX-XX-XX-XX-XX-XX/g'

testing/scripts/rst-filter (new executable file)

@ -0,0 +1,5 @@
#!/usr/bin/env bash
#
# Filters the output of btest-rst-cmd.
sed "s#${TRACES}/\{0,1\}##g"