mirror of https://github.com/zeek/zeek.git
synced 2025-10-10 02:28:21 +00:00

Merge remote-tracking branch 'origin/master' into topic/johanna/openflow

This commit is contained in: commit eb9fbd1258

93 changed files with 1289 additions and 544 deletions
CHANGES | 104
@@ -1,4 +1,108 @@
+2.4-20 | 2015-07-03 10:40:21 -0700
+
+  * Adding a weird for when truncated packets lead TCP reassembly to
+    ignore content. (Robin Sommer)
+
+2.4-19 | 2015-07-03 09:04:54 -0700
+
+  * A set of tests exercising IP defragmentation and TCP reassembly.
+    (Robin Sommer)
+
+2.4-17 | 2015-06-28 13:02:41 -0700
+
+  * BIT-1314: Add detection for Quantum Insert attacks. The TCP
+    reassembler can now keep a history of old TCP segments using the
+    tcp_max_old_segments option. An overlapping segment with different
+    data will then generate an rexmit_inconsistency event. The default
+    for tcp_max_old_segments is zero, which disables any additional
+    buffering. (Yun Zheng Hu/Robin Sommer)
+
+2.4-14 | 2015-06-28 12:30:12 -0700
+
+  * BIT-1400: Allow '<' and '>' in MIME multipart boundaries. The spec
+    doesn't actually seem to permit these, but they seem to occur in
+    the wild. (Jon Siwek)
+
+2.4-12 | 2015-06-28 12:21:11 -0700
+
+  * BIT-1399: Trying to decompress deflated HTTP content even when
+    zlib headers are missing. (Seth Hall)
+
+2.4-10 | 2015-06-25 07:11:17 -0700
+
+  * Correct a name used in a header identifier. (Justin Azoff)
+
+2.4-8 | 2015-06-24 07:50:50 -0700
+
+  * Restore the --load-seeds cmd-line option and enable the short
+    options -G/-H for --load-seeds/--save-seeds. (Daniel Thayer)
+
+2.4-6 | 2015-06-19 16:26:40 -0700
+
+  * Generate protocol confirmations for Modbus, making it appear as a
+    confirmed service in conn.log. (Seth Hall)
+
+  * Put command line options in alphabetical order. (Daniel Thayer)
+
+  * Removing dead code for the no longer supported -G switch. (Robin
+    Sommer)
+
+2.4 | 2015-06-09 07:30:53 -0700
+
+  * Release 2.4.
+
+  * Fixing tiny thing in NEWS. (Robin Sommer)
+
+2.4-beta-42 | 2015-06-08 09:41:39 -0700
+
+  * Fix reporter errors with GridFTP traffic. (Robin Sommer)
+
+2.4-beta-40 | 2015-06-06 08:20:52 -0700
+
+  * PE Analyzer: Change how we calculate the rva_table size. (Vlad Grigorescu)
+
+2.4-beta-39 | 2015-06-05 09:09:44 -0500
+
+  * Fix a unit test to check for Broker requirement. (Jon Siwek)
+
+2.4-beta-38 | 2015-06-04 14:48:37 -0700
+
+  * Test for Broker termination. (Robin Sommer)
+
+2.4-beta-37 | 2015-06-04 07:53:52 -0700
+
+  * BIT-1408: Improve I/O loop and Broker IOSource. (Jon Siwek)
+
+2.4-beta-34 | 2015-06-02 10:37:22 -0700
+
+  * Add signature support for F4M files. (Seth Hall)
+
+2.4-beta-32 | 2015-06-02 09:43:31 -0700
+
+  * A larger set of documentation updates, fixes, and extensions.
+    (Daniel Thayer)
+
+2.4-beta-14 | 2015-06-02 09:16:44 -0700
+
+  * Add memleak btest for attachments over SMTP. (Vlad Grigorescu)
+
+  * BIT-1410: Fix flipped tx_hosts and rx_hosts in files.log. Reported
+    by Ali Hadi. (Vlad Grigorescu)
+
+  * Updating the Mozilla root certs. (Seth Hall)
+
+  * Updates for the urls.bro script. Fixes BIT-1404. (Seth Hall)
+
+2.4-beta-6 | 2015-05-28 13:20:44 -0700
+
+  * Updating submodule(s).
+
+2.4-beta-2 | 2015-05-26 08:58:37 -0700
+
+  * Fix segfault when DNS is not available. Addresses BIT-1387. (Frank
+    Meier and Robin Sommer)
+
 2.4-beta | 2015-05-07 21:55:31 -0700

   * Release 2.4-beta.
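The 2.4-17 entry above describes a tunable detection. As a sketch (not part of this commit's diff), enabling it in a site policy could look like the following Bro script; the option and event names come from the entry itself, but the threshold value and the handler body are illustrative assumptions:

```bro
# Sketch only: value and handler body are illustrative.
# Keep a bounded history of old TCP segments so overlaps can be compared.
redef tcp_max_old_segments = 32;

# Raised when an overlapping segment carries different data
# (signature as of the 2.4-era sources; treat as an assumption).
event rexmit_inconsistency(c: connection, t1: string, t2: string)
	{
	print fmt("possible Quantum Insert: %s", c$id);
	}
```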
COPYING | 2
@@ -1,4 +1,4 @@
-Copyright (c) 1995-2013, The Regents of the University of California
+Copyright (c) 1995-2015, The Regents of the University of California
 through the Lawrence Berkeley National Laboratory and the
 International Computer Science Institute. All rights reserved.
NEWS | 4
@@ -4,8 +4,8 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
 (note that submodules, such as BroControl and Broccoli, come with
 their own ``CHANGES``.)

-Bro 2.4 (in progress)
-=====================
+Bro 2.4
+=======

 New Functionality
 -----------------
VERSION | 2
@@ -1 +1 @@
-2.4-beta
+2.4-20
@@ -1 +1 @@
-Subproject commit a2d290a832c35ad11f3fabb19812bcae2ff089cd
+Subproject commit 6d6679506d8762ddbba16f0b34f7ad253e3aac45

@@ -1 +1 @@
-Subproject commit 97c17d21725e42b36f4b49579077ecdc28ddb86a
+Subproject commit 54377d4746e2fd3ba7b7ca97e4a6ceccbd2cc236

@@ -1 +1 @@
-Subproject commit b02fefd5cf78c1576e59c106f5211ce5ae47cfdd
+Subproject commit f303cdbc60ad6eef35ebcd1473ee85b3123f5ef1

@@ -1 +1 @@
-Subproject commit 80b42ee3e4503783b6720855b28e83ff1658c22b
+Subproject commit 0e2da116a5e29baacaecc6daac7bc4bc9ff387c5

@@ -1 +1 @@
-Subproject commit e1ea9f67cfe3d6a81e0c1479ced0b9aa73e77c87
+Subproject commit 99d7519991b41a970809a99433ea9c7df42e9d93
doc/components/bro-plugins/README.rst (symbolic link) | 1
@@ -0,0 +1 @@
+../../../aux/plugins/README

doc/components/bro-plugins/dataseries/README.rst (symbolic link) | 1
@@ -0,0 +1 @@
+../../../../aux/plugins/dataseries/README

doc/components/bro-plugins/elasticsearch/README.rst (symbolic link) | 1
@@ -0,0 +1 @@
+../../../../aux/plugins/elasticsearch/README

doc/components/bro-plugins/netmap/README.rst (symbolic link) | 1
@@ -0,0 +1 @@
+../../../../aux/plugins/netmap/README
@@ -21,6 +21,7 @@ current, independent component releases.
 Broker - User Manual <broker/broker-manual.rst>
 BroControl - Interactive Bro management shell <broctl/README>
 Bro-Aux - Small auxiliary tools for Bro <bro-aux/README>
+Bro-Plugins - A collection of plugins for Bro <bro-plugins/README>
 BTest - A unit testing framework <btest/README>
 Capstats - Command-line packet statistic tool <capstats/README>
 PySubnetTree - Python module for CIDR lookups <pysubnettree/README>
@@ -3,7 +3,7 @@
 Writing Bro Plugins
 ===================

-Bro internally provides plugin API that enables extending
+Bro internally provides a plugin API that enables extending
 the system dynamically, without modifying the core code base. That way
 custom code remains self-contained and can be maintained, compiled,
 and installed independently. Currently, plugins can add the following
@@ -32,7 +32,7 @@ Quick Start
 ===========

 Writing a basic plugin is quite straight-forward as long as one
-follows a few conventions. In the following we walk a simple example
+follows a few conventions. In the following we create a simple example
 plugin that adds a new built-in function (bif) to Bro: we'll add
 ``rot13(s: string) : string``, a function that rotates every character
 in a string by 13 places.
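To make the intended semantics concrete, here is a minimal Python model of the rotation the plugin is meant to provide. It is not the plugin's actual C++/bif code, and the choice to rotate only ASCII letters while passing other characters through is an assumption about the intended behavior:

```python
def rot13(s: str) -> str:
    """Rotate each ASCII letter by 13 places; other characters pass through."""
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(rot13("Hello"))  # prints "Uryyb"
```

Because 13 is half the alphabet, the function is its own inverse: applying it twice returns the original string.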
@@ -81,7 +81,7 @@ The syntax of this file is just like any other ``*.bif`` file; we
 won't go into it here.

 Now we can already compile our plugin, we just need to tell the
-configure script that ``init-plugin`` put in place where the Bro
+configure script (that ``init-plugin`` created) where the Bro
 source tree is located (Bro needs to have been built there first)::

     # cd rot13-plugin
@@ -99,7 +99,7 @@ option::

     # export BRO_PLUGIN_PATH=/path/to/rot13-plugin/build
     # bro -N
     [...]
-    Plugin: Demo::Rot13 - <Insert brief description of plugin> (dynamic, version 1)
+    Demo::Rot13 - <Insert description> (dynamic, version 0.1)
     [...]

 That looks quite good, except for the dummy description that we should
@@ -108,28 +108,30 @@ is about. We do this by editing the ``config.description`` line in
 ``src/Plugin.cc``, like this::

     [...]
-    plugin::Configuration Configure()
+    plugin::Configuration Plugin::Configure()
         {
         plugin::Configuration config;
         config.name = "Demo::Rot13";
         config.description = "Caesar cipher rotating a string's characters by 13 places.";
-        config.version.major = 1;
-        config.version.minor = 0;
+        config.version.major = 0;
+        config.version.minor = 1;
         return config;
         }
     [...]

 Now rebuild and verify that the description is visible::

     # make
     [...]
     # bro -N | grep Rot13
-    Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
+    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)

-Better. Bro can also show us what exactly the plugin provides with the
+Bro can also show us what exactly the plugin provides with the
 more verbose option ``-NN``::

     # bro -NN
     [...]
-    Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
+    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)
     [Function] Demo::rot13
     [...]
@@ -157,10 +159,12 @@ The installed version went into
 ``<bro-install-prefix>/lib/bro/plugins/Demo_Rot13``.

 One can distribute the plugin independently of Bro for others to use.
-To distribute in source form, just remove the ``build/`` (``make
-distclean`` does that) and then tar up the whole ``rot13-plugin/``
+To distribute in source form, just remove the ``build/`` directory
+(``make distclean`` does that) and then tar up the whole ``rot13-plugin/``
 directory. Others then follow the same process as above after
-unpacking. To distribute the plugin in binary form, the build process
+unpacking.
+
+To distribute the plugin in binary form, the build process
 conveniently creates a corresponding tarball in ``build/dist/``. In
 this case, it's called ``Demo_Rot13-0.1.tar.gz``, with the version
 number coming out of the ``VERSION`` file that ``init-plugin`` put
@@ -169,14 +173,14 @@ plugin, but no further source files. Optionally, one can include
 further files by specifying them in the plugin's ``CMakeLists.txt``
 through the ``bro_plugin_dist_files`` macro; the skeleton does that
 for ``README``, ``VERSION``, ``CHANGES``, and ``COPYING``. To use the
-plugin through the binary tarball, just unpack it and point
-``BRO_PLUGIN_PATH`` there; or copy it into
-``<bro-install-prefix>/lib/bro/plugins/`` directly.
+plugin through the binary tarball, just unpack it into
+``<bro-install-prefix>/lib/bro/plugins/``. Alternatively, if you unpack
+it in another location, then you need to point ``BRO_PLUGIN_PATH`` there.

 Before distributing your plugin, you should edit some of the meta
 files that ``init-plugin`` puts in place. Edit ``README`` and
 ``VERSION``, and update ``CHANGES`` when you make changes. Also put a
-license file in place as ``COPYING``; if BSD is fine, you find a
+license file in place as ``COPYING``; if BSD is fine, you will find a
 template in ``COPYING.edit-me``.

 Plugin Directory Layout
@@ -193,7 +197,7 @@ directory. With the skeleton, ``<base>`` corresponds to ``build/``.
 must exist, and its content must consist of a single line with the
 qualified name of the plugin (e.g., "Demo::Rot13").

-``<base>/lib/<plugin-name>-<os>-<arch>.so``
+``<base>/lib/<plugin-name>.<os>-<arch>.so``
     The shared library containing the plugin's compiled code. Bro will
     load this in dynamically at run-time if OS and architecture match
     the current platform.
@@ -215,8 +219,8 @@ directory. With the skeleton, ``<base>`` corresponds to ``build/``.
 Any other files in ``<base>`` are ignored by Bro.

 By convention, a plugin should put its custom scripts into sub folders
-of ``scripts/``, i.e., ``scripts/<script-namespace>/<script>.bro`` to
-avoid conflicts. As usual, you can then put a ``__load__.bro`` in
+of ``scripts/``, i.e., ``scripts/<plugin-namespace>/<plugin-name>/<script>.bro``
+to avoid conflicts. As usual, you can then put a ``__load__.bro`` in
 there as well so that, e.g., ``@load Demo/Rot13`` could load a whole
 module in the form of multiple individual scripts.
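As a sketch of that convention, a ``scripts/Demo/Rot13/__load__.bro`` could simply pull in the module's individual scripts so that a single ``@load Demo/Rot13`` loads them all; the individual script names here are hypothetical, not from the original:

```bro
# scripts/Demo/Rot13/__load__.bro -- pulled in by '@load Demo/Rot13'.
# Script names below are illustrative.
@load ./main
@load ./notice
```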
@@ -242,7 +246,8 @@ as well as the ``__bro_plugin__`` magic file and any further
 distribution files specified in ``CMakeLists.txt`` (e.g., README,
 VERSION). You can find a full list of files installed in
 ``build/MANIFEST``. Behind the scenes, ``make install`` really just
-copies over the binary tarball in ``build/dist``.
+unpacks the binary tarball from ``build/dist`` into the destination
+directory.

 ``init-plugin`` will never overwrite existing files. If its target
 directory already exists, it will by default decline to do anything.
@@ -369,18 +374,19 @@ Testing Plugins
 ===============

 A plugin should come with a test suite to exercise its functionality.
-The ``init-plugin`` script puts in place a basic </btest/README> setup
+The ``init-plugin`` script puts in place a basic
+:doc:`BTest <../../components/btest/README>` setup
 to start with. Initially, it comes with a single test that just checks
 that Bro loads the plugin correctly. It won't have a baseline yet, so
 let's get that in place::

     # cd tests
     # btest -d
-    [  0%] plugin.loading ... failed
+    [  0%] rot13.show-plugin ... failed
     % 'btest-diff output' failed unexpectedly (exit code 100)
     % cat .diag
     == File ===============================
-    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1.0)
+    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)
     [Function] Demo::rot13

     == Error ===============================
@@ -413,8 +419,8 @@ correctly::

 Check the output::

-    # btest -d plugin/rot13.bro
-    [  0%] plugin.rot13 ... failed
+    # btest -d rot13/bif-rot13.bro
+    [  0%] rot13.bif-rot13 ... failed
     % 'btest-diff output' failed unexpectedly (exit code 100)
     % cat .diag
     == File ===============================
@@ -429,7 +435,7 @@ Check the output::

 Install the baseline::

-    # btest -U plugin/rot13.bro
+    # btest -U rot13/bif-rot13.bro
     all 1 tests successful

 Run the test-suite::
@@ -457,7 +463,7 @@ your plugin's debugging output with ``-B plugin-<name>``, where
 ``<name>`` is the name of the plugin as returned by its
 ``Configure()`` method, yet with the namespace-separator ``::``
 replaced with a simple dash. Example: If the plugin is called
-``Bro::Demo``, use ``-B plugin-Bro-Demo``. As usual, the debugging
+``Demo::Rot13``, use ``-B plugin-Demo-Rot13``. As usual, the debugging
 output will be recorded to ``debug.log`` if Bro's compiled in debug
 mode.
@@ -67,8 +67,8 @@ that are present in the ASCII log files::

     'id.orig_p' integer,
     ...

-Note that the ASCII ``conn.log`` will still be created. To disable the ASCII writer for a
-log stream, you can remove the default filter:
+Note that the ASCII ``conn.log`` will still be created. To prevent this file
+from being created, you can remove the default filter:

 .. code:: bro
@@ -19,195 +19,144 @@ Terminology

 Bro's logging interface is built around three main abstractions:

-Log streams
-    A stream corresponds to a single log. It defines the set of
-    fields that a log consists of with their names and fields.
-    Examples are the ``conn`` for recording connection summaries,
+Streams
+    A log stream corresponds to a single log. It defines the set of
+    fields that a log consists of with their names and types.
+    Examples are the ``conn`` stream for recording connection summaries,
     and the ``http`` stream for recording HTTP activity.

 Filters
     Each stream has a set of filters attached to it that determine
     what information gets written out. By default, each stream has
-    one default filter that just logs everything directly to disk
-    with an automatically generated file name. However, further
-    filters can be added to record only a subset, split a stream
-    into different outputs, or to even duplicate the log to
-    multiple outputs. If all filters are removed from a stream,
-    all output is disabled.
+    one default filter that just logs everything directly to disk.
+    However, additional filters can be added to record only a subset
+    of the log records, write to different outputs, or set a custom
+    rotation interval. If all filters are removed from a stream,
+    then output is disabled for that stream.

 Writers
-    A writer defines the actual output format for the information
-    being logged. At the moment, Bro comes with only one type of
-    writer, which produces tab separated ASCII files. In the
-    future we will add further writers, like for binary output and
-    direct logging into a database.
+    Each filter has a writer. A writer defines the actual output
+    format for the information being logged. The default writer is
+    the ASCII writer, which produces tab-separated ASCII files. Other
+    writers are available, like for binary output or direct logging
+    into a database.

-Basics
-======
+There are several different ways to customize Bro's logging: you can create
+a new log stream, you can extend an existing log with new fields, you
+can apply filters to an existing log stream, or you can customize the output
+format by setting log writer options. All of these approaches are
+described in this document.

-The data fields that a stream records are defined by a record type
-specified when it is created. Let's look at the script generating Bro's
-connection summaries as an example,
-:doc:`/scripts/base/protocols/conn/main.bro`. It defines a record
-:bro:type:`Conn::Info` that lists all the fields that go into
-``conn.log``, each marked with a ``&log`` attribute indicating that it
-is part of the information written out. To write a log record, the
-script then passes an instance of :bro:type:`Conn::Info` to the logging
-framework's :bro:id:`Log::write` function.
+Streams
+=======

-By default, each stream automatically gets a filter named ``default``
-that generates the normal output by recording all record fields into a
-single output file.
+In order to log data to a new log stream, all of the following needs to be
+done:

-In the following, we summarize ways in which the logging can be
-customized. We continue using the connection summaries as our example
-to work with.
+- A :bro:type:`record` type must be defined which consists of all the
+  fields that will be logged (by convention, the name of this record type is
+  usually "Info").
+- A log stream ID (an :bro:type:`enum` with type name "Log::ID") must be
+  defined that uniquely identifies the new log stream.
+- A log stream must be created using the :bro:id:`Log::create_stream` function.
+- When the data to be logged becomes available, the :bro:id:`Log::write`
+  function must be called.

-Filtering
----------
-
-To create a new output file for an existing stream, you can add a
-new filter. A filter can, e.g., restrict the set of fields being
-logged:
+In the following example, we create a new module "Foo" which creates
+a new log stream.

 .. code:: bro

-    event bro_init()
+    module Foo;
+
+    export {
+        # Create an ID for our new stream. By convention, this is
+        # called "LOG".
+        redef enum Log::ID += { LOG };
+
+        # Define the record type that will contain the data to log.
+        type Info: record {
+            ts: time &log;
+            id: conn_id &log;
+            service: string &log &optional;
+            missed_bytes: count &log &default=0;
+        };
+    }
+
+    # Optionally, we can add a new field to the connection record so that
+    # the data we are logging (our "Info" record) will be easily
+    # accessible in a variety of event handlers.
+    redef record connection += {
+        # By convention, the name of this new field is the lowercase name
+        # of the module.
+        foo: Info &optional;
+    };
+
+    # This event is handled at a priority higher than zero so that if
+    # users modify this stream in another script, they can do so at the
+    # default priority of zero.
+    event bro_init() &priority=5
         {
-        # Add a new filter to the Conn::LOG stream that logs only
-        # timestamp and originator address.
-        local filter: Log::Filter = [$name="orig-only", $path="origs", $include=set("ts", "id.orig_h")];
-        Log::add_filter(Conn::LOG, filter);
+        # Create the stream. This adds a default filter automatically.
+        Log::create_stream(Foo::LOG, [$columns=Info, $path="foo"]);
         }

-Note the fields that are set for the filter:
+In the definition of the "Info" record above, notice that each field has the
+:bro:attr:`&log` attribute. Without this attribute, a field will not appear in
+the log output. Also notice one field has the :bro:attr:`&optional` attribute.
+This indicates that the field might not be assigned any value before the
+log record is written. Finally, a field with the :bro:attr:`&default`
+attribute has a default value assigned to it automatically.

-``name``
-    A mandatory name for the filter that can later be used
-    to manipulate it further.
-
-``path``
-    The filename for the output file, without any extension (which
-    may be automatically added by the writer). Default path values
-    are generated by taking the stream's ID and munging it slightly.
-    :bro:enum:`Conn::LOG` is converted into ``conn``,
-    :bro:enum:`PacketFilter::LOG` is converted into
-    ``packet_filter``, and :bro:enum:`Known::CERTS_LOG` is
-    converted into ``known_certs``.
-
-``include``
-    A set limiting the fields to the ones given. The names
-    correspond to those in the :bro:type:`Conn::Info` record, with
-    sub-records unrolled by concatenating fields (separated with
-    dots).
-
-Using the code above, you will now get a new log file ``origs.log``
-that looks like this::
-
-    #separator \x09
-    #path   origs
-    #fields ts      id.orig_h
-    #types  time    addr
-    1128727430.350788       141.42.64.125
-    1128727435.450898       141.42.64.125
-
-If you want to make this the only log file for the stream, you can
-remove the default filter (which, conveniently, has the name
-``default``):
+At this point, the only thing missing is a call to the :bro:id:`Log::write`
+function to send data to the logging framework. The actual event handler
+where this should take place will depend on where your data becomes available.
+In this example, the :bro:id:`connection_established` event provides our data,
+and we also store a copy of the data being logged into the
+:bro:type:`connection` record:

 .. code:: bro

-    event bro_init()
+    event connection_established(c: connection)
         {
-        # Remove the filter called "default".
-        Log::remove_filter(Conn::LOG, "default");
+        local rec: Foo::Info = [$ts=network_time(), $id=c$id];
+
+        # Store a copy of the data in the connection record so other
+        # event handlers can access it.
+        c$foo = rec;
+
+        Log::write(Foo::LOG, rec);
         }

-An alternate approach to "turning off" a log is to completely disable
-the stream:
+If you run Bro with this script, a new log file ``foo.log`` will be created.
+Although we only specified four fields in the "Info" record above, the
+log output will actually contain seven fields because one of the fields
+(the one named "id") is itself a record type. Since a :bro:type:`conn_id`
+record has four fields, then each of these fields is a separate column in
+the log output. Note that the way that such fields are named in the log
+output differs slightly from the way we would refer to the same field
+in a Bro script (each dollar sign is replaced with a period). For example,
+to access the first field of a ``conn_id`` in a Bro script we would use
+the notation ``id$orig_h``, but that field is named ``id.orig_h``
+in the log output.

-.. code:: bro
+When you are developing scripts that add data to the :bro:type:`connection`
+record, care must be given to when and how long data is stored.
+Normally data saved to the connection record will remain there for the
+duration of the connection and from a practical perspective it's not
+uncommon to need to delete that data before the end of the connection.

-    event bro_init()
-        {
-        Log::disable_stream(Conn::LOG);
-        }
-
-If you want to skip only some fields but keep the rest, there is a
-corresponding ``exclude`` filter attribute that you can use instead of
-``include`` to list only the ones you are not interested in.
+Add Fields to a Log
+-------------------

-A filter can also determine output paths *dynamically* based on the
-record being logged. That allows, e.g., to record local and remote
-connections into separate files. To do this, you define a function
-that returns the desired path:
+You can add additional fields to a log by extending the record
+type that defines its content, and setting a value for the new fields
+before each log record is written.

-.. code:: bro
-
-    function split_log(id: Log::ID, path: string, rec: Conn::Info) : string
-        {
-        # Return "conn-local" if originator is a local IP, otherwise "conn-remote".
-        local lr = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
-        return fmt("%s-%s", path, lr);
-        }
-
-    event bro_init()
-        {
-        local filter: Log::Filter = [$name="conn-split", $path_func=split_log, $include=set("ts", "id.orig_h")];
-        Log::add_filter(Conn::LOG, filter);
-        }
-
-Running this will now produce two files, ``local.log`` and
-``remote.log``, with the corresponding entries. One could extend this
-further for example to log information by subnets or even by IP
-address. Be careful, however, as it is easy to create many files very
-quickly ...
-
-.. sidebar:: A More Generic Path Function
-
-    The ``split_log`` method has one draw-back: it can be used
-    only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded
-    into its argument list. However, Bro allows to do a more generic
-    variant:
-
-    .. code:: bro
-
-        function split_log(id: Log::ID, path: string, rec: record { id: conn_id; } ) : string
-            {
-            return Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
-            }
-
-    This function can be used with all log streams that have records
-    containing an ``id: conn_id`` field.
-
-While so far we have seen how to customize the columns being logged,
-you can also control which records are written out by providing a
-predicate that will be called for each log record:
-
-.. code:: bro
-
-    function http_only(rec: Conn::Info) : bool
-        {
-        # Record only connections with successfully analyzed HTTP traffic
-        return rec$service == "http";
-        }
-
-    event bro_init()
-        {
-        local filter: Log::Filter = [$name="http-only", $path="conn-http", $pred=http_only];
-        Log::add_filter(Conn::LOG, filter);
-        }
-
-This will result in a log file ``conn-http.log`` that contains only
-traffic detected and analyzed as HTTP traffic.
-
-Extending
----------
-
-You can add further fields to a log stream by extending the record
-type that defines its content. Let's say we want to add a boolean
-field ``is_private`` to :bro:type:`Conn::Info` that indicates whether the
-originator IP address is part of the :rfc:`1918` space:
+Let's say we want to add a boolean field ``is_private`` to
+:bro:type:`Conn::Info` that indicates whether the originator IP address
+is part of the :rfc:`1918` space:

 .. code:: bro
@@ -218,9 +167,21 @@ originator IP address is part of the :rfc:`1918` space:

         is_private: bool &default=F &log;
     };

+As this example shows, when extending a log stream's "Info" record, each
+new field must always be declared either with a ``&default`` value or
+as ``&optional``. Furthermore, you need to add the ``&log`` attribute
+or otherwise the field won't appear in the log file.
+
-Now we need to set the field. A connection's summary is generated at
-the time its state is removed from memory. We can add another handler
+Now we need to set the field. Although the details vary depending on which
+log is being extended, in general it is important to choose a suitable event
+in which to set the additional fields because we need to make sure that
+the fields are set before the log record is written. Sometimes the right
+choice is the same event which writes the log record, but at a higher
+priority (in order to ensure that the event handler that sets the additional
+fields is executed before the event handler that writes the log record).
+
+In this example, since a connection's summary is generated at
+the time its state is removed from memory, we can add another handler
 at that time that sets our field correctly:

 .. code:: bro
@ -232,31 +193,58 @@ at that time that sets our field correctly:
        }

Now ``conn.log`` will show a new field ``is_private`` of type
``bool``. If you look at the Bro script which defines the connection
log stream :doc:`/scripts/base/protocols/conn/main.bro`, you will see
that ``Log::write`` gets called in an event handler for the
same event as used in this example to set the additional fields, but at a
lower priority than the one used in this example (i.e., the log record gets
written after we assign the ``is_private`` field).

Notes:

- For extending logs this way, one needs a bit of knowledge about how
  the script that creates the log stream is organizing its state
  keeping. Most of the standard Bro scripts attach their log state to
  the :bro:type:`connection` record where it can then be accessed, just
  as with ``c$conn`` above. For example, the HTTP analysis adds a field
  ``http`` of type :bro:type:`HTTP::Info` to the :bro:type:`connection`
  record. See the script reference for more information.

- When extending records as shown above, the new fields must always be
  declared either with a ``&default`` value or as ``&optional``.
  Furthermore, you need to add the ``&log`` attribute or otherwise the
  field won't appear in the output.

Define a Logging Event
----------------------

Sometimes it is helpful to do additional analysis of the information
being logged. For these cases, a stream can specify an event that will
be generated every time a log record is written to it. To do this, we
need to modify the example module shown above to look something like this:

.. code:: bro

    module Foo;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts: time &log;
            id: conn_id &log;
            service: string &log &optional;
            missed_bytes: count &log &default=0;
        };

        # Define a logging event. By convention, this is called
        # "log_<stream>".
        global log_foo: event(rec: Info);
    }

    event bro_init() &priority=5
        {
        # Specify the "log_foo" event here in order for Bro to raise it.
        Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo,
                                      $path="foo"]);
        }

All of Bro's default log streams define such an event. For example, the
connection log stream raises the event :bro:id:`Conn::log_conn`. You
could use that, for example, to flag when a connection to a
specific destination exceeds a certain duration:

@ -270,7 +258,7 @@ specific destination exceeds a certain duration:

    event Conn::log_conn(rec: Conn::Info)
        {
        if ( rec?$duration && rec$duration > 5mins )
            NOTICE([$note=Long_Conn_Found,
                    $msg=fmt("unusually long conn to %s", rec$id$resp_h),
                    $id=rec$id]);

@ -281,15 +269,196 @@ externally with Perl scripts. Much of what such an external script
would do later offline, one may instead do directly inside of Bro in
real-time.

Disable a Stream
----------------

One way to "turn off" a log is to completely disable the stream. For
example, the following will prevent the conn.log from being written:

.. code:: bro

    event bro_init()
        {
        Log::disable_stream(Conn::LOG);
        }

Note that this must run after the stream is created, so the priority
of this event handler must be lower than the priority of the event handler
where the stream was created.
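
For instance, if the stream was created at the default priority (a sketch
reusing the ``Conn::LOG`` stream from above; the exact priority needed
depends on where the stream is created), a negative priority runs late
enough:

.. code:: bro

    event bro_init() &priority=-5
        {
        # Runs after the handler that created the Conn::LOG stream.
        Log::disable_stream(Conn::LOG);
        }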

Filters
=======

A stream has one or more filters attached to it (a stream without any filters
will not produce any log output). When a stream is created, it automatically
gets a default filter attached to it. This default filter can be removed
or replaced, or other filters can be added to the stream. This is accomplished
by using either the :bro:id:`Log::add_filter` or :bro:id:`Log::remove_filter`
function. This section shows how to use filters to do such tasks as
renaming a log file, splitting the output into multiple files, controlling
which records are written, and setting a custom rotation interval.

Rename Log File
---------------

Normally, the log filename for a given log stream is determined when the
stream is created, unless you explicitly specify a different one by adding
a filter.

The easiest way to change a log filename is to simply replace the
default log filter with a new filter that specifies a value for the "path"
field. In this example, "conn.log" will be changed to "myconn.log":

.. code:: bro

    event bro_init()
        {
        # Replace default filter for the Conn::LOG stream in order to
        # change the log filename.

        local f = Log::get_filter(Conn::LOG, "default");
        f$path = "myconn";
        Log::add_filter(Conn::LOG, f);
        }

Keep in mind that the "path" field of a log filter never contains the
filename extension. The extension will be determined later by the log writer.

Add a New Log File
------------------

Normally, a log stream writes to only one log file. However, you can
add filters so that the stream writes to multiple files. This is useful
if you want to restrict the set of fields being logged to the new file.

In this example, a new filter is added to the Conn::LOG stream that writes
two fields to a new log file:

.. code:: bro

    event bro_init()
        {
        # Add a new filter to the Conn::LOG stream that logs only
        # timestamp and originator address.

        local filter: Log::Filter = [$name="orig-only", $path="origs",
                                     $include=set("ts", "id.orig_h")];
        Log::add_filter(Conn::LOG, filter);
        }

Notice how the "include" filter attribute specifies a set that limits the
fields to the ones given. The names correspond to those in the
:bro:type:`Conn::Info` record (however, because the "id" field is itself a
record, we can specify an individual field of "id" by the dot notation
shown in the example).

Using the code above, in addition to the regular ``conn.log``, you will
now also get a new log file ``origs.log`` that looks like the regular
``conn.log``, but will have only the fields specified in the "include"
filter attribute.

If you want to skip only some fields but keep the rest, there is a
corresponding ``exclude`` filter attribute that you can use instead of
``include`` to list only the ones you are not interested in.
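
For example, a filter along the lines of the "orig-only" filter above (a
sketch; the filter and path names are arbitrary) could log every
:bro:type:`Conn::Info` field except ``history`` and ``service``:

.. code:: bro

    event bro_init()
        {
        # Log all Conn::Info fields except the listed ones.
        local filter: Log::Filter = [$name="no-history", $path="conn-slim",
                                     $exclude=set("history", "service")];
        Log::add_filter(Conn::LOG, filter);
        }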

If you want to make this the only log file for the stream, you can
remove the default filter:

.. code:: bro

    event bro_init()
        {
        # Remove the filter called "default".
        Log::remove_filter(Conn::LOG, "default");
        }

Determine Log Path Dynamically
------------------------------

Instead of using the "path" filter attribute, a filter can determine
output paths *dynamically* based on the record being logged. That
allows, e.g., recording local and remote connections into separate
files. To do this, you define a function that returns the desired path,
and use the "path_func" filter attribute:

.. code:: bro

    # Note: if using BroControl then you don't need to redef local_nets.
    redef Site::local_nets = { 192.168.0.0/16 };

    function myfunc(id: Log::ID, path: string, rec: Conn::Info) : string
        {
        # Return "conn-local" if originator is a local IP, otherwise
        # return "conn-remote".
        local r = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
        return fmt("%s-%s", path, r);
        }

    event bro_init()
        {
        local filter: Log::Filter = [$name="conn-split",
                     $path_func=myfunc, $include=set("ts", "id.orig_h")];
        Log::add_filter(Conn::LOG, filter);
        }

Running this will now produce two new files, ``conn-local.log`` and
``conn-remote.log``, with the corresponding entries (for this example to work,
``Site::local_nets`` must specify your local network). One could extend
this further, for example, to log information by subnet or even by IP
address. Be careful, however, as it is easy to create many files very
quickly.

The ``myfunc`` function has one drawback: it can be used
only with the :bro:enum:`Conn::LOG` stream, as the record type is hardcoded
into its argument list. However, Bro allows a more generic
variant:

.. code:: bro

    function myfunc(id: Log::ID, path: string,
                    rec: record { id: conn_id; } ) : string
        {
        local r = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
        return fmt("%s-%s", path, r);
        }

This function can be used with all log streams that have records
containing an ``id: conn_id`` field.
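
For example, the same generic function could also be attached to the HTTP
log stream (a sketch; the filter name is arbitrary, and any stream whose
"Info" record has an ``id`` field of type ``conn_id`` works the same way):

.. code:: bro

    event bro_init()
        {
        # Split http.log into http-local.log and http-remote.log.
        local filter: Log::Filter = [$name="http-split", $path_func=myfunc];
        Log::add_filter(HTTP::LOG, filter);
        }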

Filter Log Records
------------------

We have seen how to customize the columns being logged, but
you can also control which records are written out by providing a
predicate that will be called for each log record:

.. code:: bro

    function http_only(rec: Conn::Info) : bool
        {
        # Record only connections with successfully analyzed HTTP traffic
        return rec?$service && rec$service == "http";
        }

    event bro_init()
        {
        local filter: Log::Filter = [$name="http-only", $path="conn-http",
                                     $pred=http_only];
        Log::add_filter(Conn::LOG, filter);
        }

This will result in a new log file ``conn-http.log`` that contains only
the log records from ``conn.log`` that are analyzed as HTTP traffic.

Rotation
--------

The log rotation interval is globally controllable for all
filters by redefining the :bro:id:`Log::default_rotation_interval` option
(note that when using BroControl, this option is set automatically via
the BroControl configuration).
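
For example, to rotate all logs once per hour:

.. code:: bro

    redef Log::default_rotation_interval = 1 hr;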

Or specifically for certain :bro:type:`Log::Filter` instances by setting
their ``interv`` field. Here's an example of changing just the
@ -301,90 +470,73 @@ their ``interv`` field. Here's an example of changing just the
        {
        local f = Log::get_filter(Conn::LOG, "default");
        f$interv = 1 min;
        Log::add_filter(Conn::LOG, f);
        }

Writers
=======

Each filter has a writer. If you do not specify a writer when adding a
filter to a stream, then the ASCII writer is the default.

There are two ways to specify a non-default writer. To change the default
writer for all log filters, just redefine the :bro:id:`Log::default_writer`
option. Alternatively, you can specify the writer to use on a per-filter
basis by setting a value for the filter's "writer" field. Consult the
documentation of the writer to use to see if there are other options that are
needed.
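
For example, to switch all filters over to another built-in writer (a
sketch; the SQLite writer ships with Bro, and its writer-specific options
are described in its own documentation):

.. code:: bro

    redef Log::default_writer = Log::WRITER_SQLITE;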

ASCII Writer
------------

By default, the ASCII writer outputs log files that begin with several
lines of metadata, followed by the actual log output. The metadata
describes the format of the log file, the "path" of the log (i.e., the log
filename without file extension), and also specifies the time that the log
was created and the time when Bro finished writing to it.
The ASCII writer has a number of options for customizing the format of its
output, see :doc:`/scripts/base/frameworks/logging/writers/ascii.bro`.
If you change the output format options, then be careful to check whether
your postprocessing scripts can still recognize your log files.

Some writer options are global (i.e., they affect all log filters using
that log writer). For example, to change the output format of all ASCII
logs to JSON format:

.. code:: bro

    redef LogAscii::use_json = T;

Some writer options are filter-specific (i.e., they affect only the filters
that explicitly specify the option). For example, to change the output
format of the ``conn.log`` only:

.. code:: bro

    event bro_init()
        {
        local f = Log::get_filter(Conn::LOG, "default");
        # Use tab-separated-value mode
        f$config = table(["tsv"] = "T");
        Log::add_filter(Conn::LOG, f);
        }

You can also add the state to the :bro:type:`connection` record to make
it easily accessible across event handlers:

.. code:: bro

    redef record connection += {
        foo: Info &optional;
    };

Now you can use the :bro:id:`Log::write` method to output log records and
save the logged ``Foo::Info`` record into the connection record:

.. code:: bro

    event connection_established(c: connection)
        {
        local rec: Foo::Info = [$ts=network_time(), $id=c$id];
        c$foo = rec;
        Log::write(Foo::LOG, rec);
        }

See the existing scripts for how to work with such a new connection
field. A simple example is :doc:`/scripts/base/protocols/syslog/main.bro`.

When you are developing scripts that add data to the :bro:type:`connection`
record, care must be given to when and how long data is stored.
Normally data saved to the connection record will remain there for the
duration of the connection, and from a practical perspective it's not
uncommon to need to delete that data before the end of the connection.
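
For example, if the ``foo`` field added above is no longer needed once the
record has been written, it can be removed again (a sketch; which event is
the right place to delete depends on the particular script):

.. code:: bro

    event connection_finished(c: connection)
        {
        # Drop the per-connection state once it is no longer needed.
        if ( c?$foo )
            delete c$foo;
        }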

Other Writers
-------------

Bro supports the following additional built-in output formats:

.. toctree::
   :maxdepth: 1

   logging-input-sqlite

Additional writers are available as external plugins:

.. toctree::
   :maxdepth: 1

   ../components/bro-plugins/dataseries/README
   ../components/bro-plugins/elasticsearch/README

|
@ -8,10 +8,12 @@ How to Upgrade
If you're doing an upgrade install (rather than a fresh install),
there are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over.

Regardless of which approach you choose, if you are using BroControl, then
before doing the upgrade you should stop all running Bro processes with the
"broctl stop" command. After the upgrade is complete you will need
to run "broctl deploy".

In the following we summarize general guidelines for upgrading; see
the :ref:`release-notes` for version-specific information.
|
|
@ -46,8 +46,7 @@ To build Bro from source, the following additional dependencies are required:
* zlib headers
* Perl

To install the required dependencies, you can use:

* RPM/RedHat-based Linux:

@ -68,13 +67,17 @@ that ``bash`` and ``python`` are in your ``PATH``):

  .. console::

     sudo pkg install bash cmake swig bison python perl py27-sqlite3

  Note that in older versions of FreeBSD, you might have to use the
  "pkg_add -r" command instead of "pkg install".

* Mac OS X:

  Compiling source code on Macs requires first installing Xcode_ (in older
  versions of Xcode, you would then need to go through its
  "Preferences..." -> "Downloads" menus to install the "Command Line Tools"
  component).

  OS X comes with all required dependencies except for CMake_ and SWIG_.
  Distributions of these dependencies can likely be obtained from your
@ -94,7 +97,6 @@ build time:
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
* Ruby executable, library, and headers (for Broccoli Ruby bindings)

LibGeoIP is probably the most interesting and can be installed
on most platforms by following the instructions for :ref:`installing
@ -119,8 +121,8 @@ platforms for binary releases and for installation instructions.
Linux based binary installations are usually performed by adding
information about the Bro packages to the respective system packaging
tool. Then the usual system utilities such as ``apt``, ``yum``
or ``zypper`` are used to perform the installation. By default,
installations of binary packages will go into ``/opt/bro``.

* MacOS Disk Image with Installer
@ -131,7 +133,7 @@ platforms for binary releases and for installation instructions.
The primary install prefix for binary packages is ``/opt/bro``.

Installing from Source
======================

Bro releases are bundled into source packages for convenience and are
available on the `bro downloads page`_. Alternatively, the latest
@ -24,9 +24,10 @@ Managing Bro with BroControl
BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster. This section explains how to use BroControl
to manage a stand-alone Bro installation. For a complete reference on
BroControl, see the :doc:`BroControl <../components/broctl/README>`
documentation. For instructions on how to configure a Bro cluster,
see the :doc:`Cluster Configuration <../configuration/index>` documentation.

A Minimal Starting Configuration
--------------------------------
@ -173,14 +173,20 @@ Here is a more detailed explanation of each attribute:
   Rotates a file after a specified interval.

   Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &rotate_size

   Rotates a file after it has reached a given size in bytes.

   Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &encrypt

   Encrypts files right before writing them to disk.

   Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &raw_output

   Opens a file in raw mode, i.e., non-ASCII characters are not
@ -229,5 +235,4 @@ Here is a more detailed explanation of each attribute:
   The associated identifier is marked as deprecated and will be
   removed in a future version of Bro. Look in the NEWS file for more
   instructions to migrate code that uses deprecated functionality.

@ -58,6 +58,23 @@ executed. Directives are evaluated before script execution begins.
for that script are ignored).


.. bro:keyword:: @load-plugin

   Activate a dynamic plugin with the specified plugin name. The specified
   plugin must be located in Bro's plugin search path. Example::

       @load-plugin Demo::Rot13

   By default, Bro will automatically activate all dynamic plugins found
   in the plugin search path (the search path can be changed by setting
   the environment variable BRO_PLUGIN_PATH to a colon-separated list of
   directories). However, in bare mode ("bro -b"), dynamic plugins can be
   activated only by using "@load-plugin", or by specifying the full
   plugin name on the Bro command-line (e.g., "bro Demo::Rot13"), or by
   setting the environment variable BRO_PLUGIN_ACTIVATE to a
   comma-separated list of plugin names.
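
   For example, a plugin could be activated in bare mode from the shell
   like this (``Demo::Rot13`` as above; the script name is just a
   placeholder):

   .. console::

      export BRO_PLUGIN_ACTIVATE="Demo::Rot13"
      bro -b myscript.bro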


.. bro:keyword:: @load-sigs

   This works similarly to "@load", except that in this case the filename
|
|
@ -26,13 +26,21 @@ Network Protocols
+----------------------------+---------------------------------------+---------------------------------+
| irc.log                    | IRC commands and responses            | :bro:type:`IRC::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| kerberos.log               | Kerberos                              | :bro:type:`KRB::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| modbus.log                 | Modbus commands and responses         | :bro:type:`Modbus::Info`        |
+----------------------------+---------------------------------------+---------------------------------+
| modbus_register_change.log | Tracks changes to Modbus holding      | :bro:type:`Modbus::MemmapInfo`  |
|                            | registers                             |                                 |
+----------------------------+---------------------------------------+---------------------------------+
| mysql.log                  | MySQL                                 | :bro:type:`MySQL::Info`         |
+----------------------------+---------------------------------------+---------------------------------+
| radius.log                 | RADIUS authentication attempts        | :bro:type:`RADIUS::Info`        |
+----------------------------+---------------------------------------+---------------------------------+
| rdp.log                    | RDP                                   | :bro:type:`RDP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| sip.log                    | SIP                                   | :bro:type:`SIP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| smtp.log                   | SMTP transactions                     | :bro:type:`SMTP::Info`          |
+----------------------------+---------------------------------------+---------------------------------+
| snmp.log                   | SNMP messages                         | :bro:type:`SNMP::Info`          |
@ -56,6 +64,8 @@ Files
+============================+=======================================+=================================+
| files.log                  | File analysis results                 | :bro:type:`Files::Info`         |
+----------------------------+---------------------------------------+---------------------------------+
| pe.log                     | Portable Executable (PE)              | :bro:type:`PE::Info`            |
+----------------------------+---------------------------------------+---------------------------------+
| x509.log                   | X.509 certificate info                | :bro:type:`X509::Info`          |
+----------------------------+---------------------------------------+---------------------------------+

|
|
@ -258,8 +258,8 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: break

   The "break" statement is used to break out of a :bro:keyword:`switch`,
   :bro:keyword:`for`, or :bro:keyword:`while` statement.


.. bro:keyword:: delete
@ -379,10 +379,10 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: next

   The "next" statement can only appear within a :bro:keyword:`for` or
   :bro:keyword:`while` loop. It causes execution to skip to the next
   iteration.

   For an example, see the :bro:keyword:`for` statement.

.. bro:keyword:: print
|
@ -571,7 +571,7 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: while

   A "while" loop iterates over a body statement as long as a given
   condition remains true.

   A :bro:keyword:`break` statement can be used at any time to immediately
|
@ -609,8 +609,8 @@ Here are the statements that the Bro scripting language supports.
(outside of the braces) of a compound statement.

A compound statement is required in order to execute more than one
statement in the body of a :bro:keyword:`for`, :bro:keyword:`while`,
:bro:keyword:`if`, or :bro:keyword:`when` statement.

Example::

33
man/bro.8
|
@ -51,12 +51,6 @@ add given prefix to policy file resolution
\fB\-r\fR,\ \-\-readfile <readfile>
read from given tcpdump file
.TP
\fB\-s\fR,\ \-\-rulefile <rulefile>
read rules from given file
.TP
@ -78,27 +72,21 @@ run the specified policy file analysis
\fB\-C\fR,\ \-\-no\-checksums
ignore checksums
.TP
\fB\-D\fR,\ \-\-dfa\-size <size>
DFA state cache size
.TP
\fB\-F\fR,\ \-\-force\-dns
force DNS
.TP
\fB\-I\fR,\ \-\-print\-id <ID name>
print out given ID
.TP
\fB\-J\fR,\ \-\-set\-seed <seed>
set the random number seed
.TP
\fB\-K\fR,\ \-\-md5\-hashkey <hashkey>
set key for MD5\-keyed hashing
.TP
\fB\-L\fR,\ \-\-rule\-benchmark
benchmark for rules
.TP
\fB\-N\fR,\ \-\-print\-plugins
print available plugins and exit (\fB\-NN\fR for verbose)
.TP
\fB\-O\fR,\ \-\-optimize
optimize policy script
.TP
\fB\-P\fR,\ \-\-prime\-dns
prime DNS
.TP
@ -120,7 +108,7 @@ Record process status in file
\fB\-W\fR,\ \-\-watchdog
activate watchdog timer
.TP
\fB\-X\fR,\ \-\-broxygen <cfgfile>
generate documentation based on config file
.TP
\fB\-\-pseudo\-realtime[=\fR<speedup>]
@ -131,6 +119,19 @@ load seeds from given file
.TP
\fB\-\-save\-seeds\fR <file>
save seeds to given file
.TP
The following option is available only when Bro is built with the \-\-enable\-debug configure option:
.TP
\fB\-B\fR,\ \-\-debug <dbgstreams>
Enable debugging output for selected streams ('\-B help' for help)
.TP
The following options are available only when Bro is built with gperftools support (use the \-\-enable\-perftools and \-\-enable\-perftools\-debug configure options):
.TP
\fB\-m\fR,\ \-\-mem\-leaks
show leaks
.TP
\fB\-M\fR,\ \-\-mem\-profile
record heap
.SH ENVIRONMENT
.TP
.B BROPATH
|
1
scripts/base/files/pe/README
Normal file
|
@ -0,0 +1 @@
Support for Portable Executable (PE) file analysis.
2
scripts/base/frameworks/broker/README
Normal file
|
@ -0,0 +1,2 @@
The Broker communication framework facilitates connecting to remote Bro
instances to share state and transfer events.
@ -78,6 +78,12 @@ signature file-coldfusion {
    file-magic /^([\x0d\x0a[:blank:]]*(<!--.*-->)?)*<(CFPARAM|CFSET|CFIF)/
}

# Adobe Flash Media Manifest
signature file-f4m {
    file-mime "application/f4m", 49
    file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\//
}

# Microsoft LNK files
signature file-lnk {
    file-mime "application/x-ms-shortcut", 49
|
|
@ -6,9 +6,10 @@
module Log;

export {
    ## Type that defines an ID unique to each log stream. Scripts creating new
    ## log streams need to redef this enum to add their own specific log ID.
    ## The log ID implicitly determines the default name of the generated log
    ## file.
    type Log::ID: enum {
        ## Dummy place-holder.
        UNKNOWN
@@ -20,25 +21,24 @@ export {
 	## If true, remote logging is by default enabled for all filters.
 	const enable_remote_logging = T &redef;
 
-	## Default writer to use if a filter does not specify
-	## anything else.
+	## Default writer to use if a filter does not specify anything else.
 	const default_writer = WRITER_ASCII &redef;
 
-	## Default separator between fields for logwriters.
-	## Can be overwritten by individual writers.
+	## Default separator to use between fields.
+	## Individual writers can use a different value.
 	const separator = "\t" &redef;
 
-	## Separator between set elements.
-	## Can be overwritten by individual writers.
+	## Default separator to use between elements of a set.
+	## Individual writers can use a different value.
 	const set_separator = "," &redef;
 
-	## String to use for empty fields. This should be different from
-	## *unset_field* to make the output unambiguous.
-	## Can be overwritten by individual writers.
+	## Default string to use for empty fields. This should be different
+	## from *unset_field* to make the output unambiguous.
+	## Individual writers can use a different value.
 	const empty_field = "(empty)" &redef;
 
-	## String to use for an unset &optional field.
-	## Can be overwritten by individual writers.
+	## Default string to use for an unset &optional field.
+	## Individual writers can use a different value.
 	const unset_field = "-" &redef;
 
 	## Type defining the content of a logging stream.
@@ -69,7 +69,7 @@ export {
 	## If no ``path`` is defined for the filter, then the first call
 	## to the function will contain an empty string.
 	##
-	## rec: An instance of the streams's ``columns`` type with its
+	## rec: An instance of the stream's ``columns`` type with its
 	## fields set to the values to be logged.
 	##
 	## Returns: The path to be used for the filter.
@@ -87,7 +87,8 @@ export {
 		terminating: bool;	##< True if rotation occured due to Bro shutting down.
 	};
 
-	## Default rotation interval. Zero disables rotation.
+	## Default rotation interval to use for filters that do not specify
+	## an interval. Zero disables rotation.
 	##
 	## Note that this is overridden by the BroControl LogRotationInterval
 	## option.
@@ -122,8 +123,8 @@ export {
 		## Indicates whether a log entry should be recorded.
 		## If not given, all entries are recorded.
 		##
-		## rec: An instance of the streams's ``columns`` type with its
-		## fields set to the values to logged.
+		## rec: An instance of the stream's ``columns`` type with its
+		## fields set to the values to be logged.
 		##
 		## Returns: True if the entry is to be recorded.
 		pred: function(rec: any): bool &optional;
@@ -131,10 +132,10 @@ export {
 		## Output path for recording entries matching this
 		## filter.
 		##
-		## The specific interpretation of the string is up to
-		## the used writer, and may for example be the destination
+		## The specific interpretation of the string is up to the
+		## logging writer, and may for example be the destination
 		## file name. Generally, filenames are expected to be given
-		## without any extensions; writers will add appropiate
+		## without any extensions; writers will add appropriate
 		## extensions automatically.
 		##
 		## If this path is found to conflict with another filter's
@@ -151,7 +152,7 @@ export {
 		## easy to flood the disk by returning a new string for each
 		## connection. Upon adding a filter to a stream, if neither
 		## ``path`` nor ``path_func`` is explicitly set by them, then
-		## :bro:see:`default_path_func` is used.
+		## :bro:see:`Log::default_path_func` is used.
 		##
 		## id: The ID associated with the log stream.
 		##
@@ -161,7 +162,7 @@ export {
 		## then the first call to the function will contain an
 		## empty string.
 		##
-		## rec: An instance of the streams's ``columns`` type with its
+		## rec: An instance of the stream's ``columns`` type with its
 		## fields set to the values to be logged.
 		##
 		## Returns: The path to be used for the filter, which will be
@@ -185,7 +186,7 @@ export {
 		## If true, entries are passed on to remote peers.
 		log_remote: bool &default=enable_remote_logging;
 
-		## Rotation interval.
+		## Rotation interval. Zero disables rotation.
 		interv: interval &default=default_rotation_interval;
 
 		## Callback function to trigger for rotated files. If not set, the
@@ -215,9 +216,9 @@ export {
 
 	## Removes a logging stream completely, stopping all the threads.
 	##
-	## id: The ID enum to be associated with the new logging stream.
+	## id: The ID associated with the logging stream.
 	##
-	## Returns: True if a new stream was successfully removed.
+	## Returns: True if the stream was successfully removed.
 	##
 	## .. bro:see:: Log::create_stream
 	global remove_stream: function(id: ID) : bool;
@@ -1,15 +1,15 @@
 ##! Interface for the ASCII log writer. Redefinable options are available
 ##! to tweak the output format of ASCII logs.
 ##!
-##! The ASCII writer supports currently one writer-specific filter option via
-##! ``config``: setting ``tsv`` to the string ``T`` turns the output into
+##! The ASCII writer currently supports one writer-specific per-filter config
+##! option: setting ``tsv`` to the string ``T`` turns the output into
 ##! "tab-separated-value" mode where only a single header row with the column
 ##! names is printed out as meta information, with no "# fields" prepended; no
-##! other meta data gets included in that mode.
+##! other meta data gets included in that mode. Example filter using this::
 ##!
-##! Example filter using this::
-##!
-##! local my_filter: Log::Filter = [$name = "my-filter", $writer = Log::WRITER_ASCII, $config = table(["tsv"] = "T")];
+##!    local f: Log::Filter = [$name = "my-filter",
+##!                            $writer = Log::WRITER_ASCII,
+##!                            $config = table(["tsv"] = "T")];
 ##!
 
 module LogAscii;
@@ -29,6 +29,8 @@ export {
 	## Format of timestamps when writing out JSON. By default, the JSON
 	## formatter will use double values for timestamps which represent the
 	## number of seconds from the UNIX epoch.
+	##
+	## This option is also available as a per-filter ``$config`` option.
 	const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
 
 	## If true, include lines with log meta information such as column names
@@ -19,7 +19,7 @@ export {
 	const unset_field = Log::unset_field &redef;
 
 	## String to use for empty fields. This should be different from
-	## *unset_field* to make the output unambiguous.
+	## *unset_field* to make the output unambiguous.
 	const empty_field = Log::empty_field &redef;
 }
@@ -966,6 +966,11 @@ const tcp_max_above_hole_without_any_acks = 16384 &redef;
 ## .. bro:see:: tcp_max_initial_window tcp_max_above_hole_without_any_acks
 const tcp_excessive_data_without_further_acks = 10 * 1024 * 1024 &redef;
 
+## Number of TCP segments to buffer beyond what's been acknowledged already
+## to detect retransmission inconsistencies. Zero disables any additonal
+## buffering.
+const tcp_max_old_segments = 0 &redef;
+
 ## For services without a handler, these sets define originator-side ports
 ## that still trigger reassembly.
 ##
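The policy this option enables can be illustrated with a small, self-contained sketch. This is not Bro's actual implementation (all names below are hypothetical): keep up to N already-acknowledged segments per connection, and compare each new segment against them so that a retransmission carrying different bytes can be flagged.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <deque>
#include <string>
#include <utility>

// Hypothetical sketch of bounded old-segment buffering: remember up to
// max_old delivered segments and flag retransmissions whose bytes differ
// from what was previously seen in the same sequence range.
struct OldSegmentBuffer {
    size_t max_old;  // 0 disables buffering, like tcp_max_old_segments
    std::deque<std::pair<uint64_t, std::string>> old_segments;  // (seq, data)

    void remember(uint64_t seq, const std::string& data) {
        if ( max_old == 0 )
            return;
        old_segments.emplace_back(seq, data);
        while ( old_segments.size() > max_old )
            old_segments.pop_front();  // evict oldest, bounding memory
    }

    // True if `data` at `seq` contradicts a previously seen segment.
    bool inconsistent(uint64_t seq, const std::string& data) const {
        for ( const auto& s : old_segments ) {
            uint64_t lo = std::max<uint64_t>(seq, s.first);
            uint64_t hi = std::min<uint64_t>(seq + data.size(),
                                             s.first + s.second.size());
            for ( uint64_t i = lo; i < hi; ++i )
                if ( data[i - seq] != s.second[i - s.first] )
                    return true;  // overlapping bytes differ
        }
        return false;
    }
};
```

With the default of zero nothing is ever buffered, preserving the old memory behavior; a nonzero value trades bounded extra memory for the ability to spot inconsistent retransmissions.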
@@ -86,7 +86,7 @@ event gridftp_possibility_timeout(c: connection)
 	{
 	# only remove if we did not already detect it and the connection
 	# is not yet at its end.
-	if ( "gridftp-data" !in c$service && ! c$conn?$service )
+	if ( "gridftp-data" !in c$service && ! (c?$conn && c$conn?$service) )
 		{
-		ConnThreshold::delete_bytes_threshold(c, size_threshold, T);
+		ConnThreshold::delete_bytes_threshold(c, size_threshold, F);
1 scripts/base/protocols/krb/README Normal file
@@ -0,0 +1 @@
+Support for Kerberos protocol analysis.

@@ -1,4 +1,5 @@
-##! Implements base functionality for KRB analysis. Generates the krb.log file.
+##! Implements base functionality for KRB analysis. Generates the kerberos.log
+##! file.
 
 module KRB;
1 scripts/base/protocols/mysql/README Normal file
@@ -0,0 +1 @@
+Support for MySQL protocol analysis.

1 scripts/base/protocols/radius/README Normal file
@@ -0,0 +1 @@
+Support for RADIUS protocol analysis.

1 scripts/base/protocols/rdp/README Normal file
@@ -0,0 +1 @@
+Support for Remote Desktop Protocol (RDP) analysis.

1 scripts/base/protocols/sip/README Normal file
@@ -0,0 +1 @@
+Support for Session Initiation Protocol (SIP) analysis.

1 scripts/base/protocols/ssh/README Normal file
@@ -0,0 +1 @@
+Support for SSH protocol analysis.
File diff suppressed because one or more lines are too long
@@ -6,23 +6,23 @@ const url_regex = /^([a-zA-Z\-]{3,5})(:\/\/[^\/?#"'\r\n><]*)([^?#"'\r\n><]*)([^[
 ## A URI, as parsed by :bro:id:`decompose_uri`.
 type URI: record {
 	## The URL's scheme..
-	scheme: string &optional;
+	scheme: string &optional;
 	## The location, which could be a domain name or an IP address. Left empty if not
 	## specified.
-	netlocation: string;
+	netlocation: string;
 	## Port number, if included in URI.
-	portnum: count &optional;
+	portnum: count &optional;
 	## Full including the file name. Will be '/' if there's not path given.
-	path: string;
+	path: string;
 	## Full file name, including extension, if there is a file name.
-	file_name: string &optional;
+	file_name: string &optional;
 	## The base filename, without extension, if there is a file name.
-	file_base: string &optional;
+	file_base: string &optional;
 	## The filename's extension, if there is a file name.
-	file_ext: string &optional;
+	file_ext: string &optional;
 	## A table of all query parameters, mapping their keys to values, if there's a
 	## query.
-	params: table[string] of string &optional;
+	params: table[string] of string &optional;
 };
 
 ## Extracts URLs discovered in arbitrary text.
@@ -46,19 +46,19 @@ function find_all_urls_without_scheme(s: string): string_set
 	return return_urls;
 	}
 
-function decompose_uri(s: string): URI
+function decompose_uri(uri: string): URI
 	{
 	local parts: string_vec;
-	local u: URI = [$netlocation="", $path="/"];
+	local u = URI($netlocation="", $path="/");
+	local s = uri;
 
-	if ( /\?/ in s)
+	if ( /\?/ in s )
 		{
 		# Parse query.
 		u$params = table();
 
 		parts = split_string1(s, /\?/);
 		s = parts[0];
-		local query: string = parts[1];
+		local query = parts[1];
 
 		if ( /&/ in query )
 			{

@@ -73,7 +73,7 @@ function decompose_uri(s: string): URI
 				}
 			}
 		}
-		else
+		else if ( /=/ in query )
 			{
 			parts = split_string1(query, /=/);
 			u$params[parts[0]] = parts[1];

@@ -97,14 +97,14 @@ function decompose_uri(s: string): URI
 
 	if ( |u$path| > 1 && u$path[|u$path| - 1] != "/" )
 		{
-		local last_token: string = find_last(u$path, /\/.+/);
+		local last_token = find_last(u$path, /\/.+/);
 		local full_filename = split_string1(last_token, /\//)[1];
 
 		if ( /\./ in full_filename )
 			{
 			u$file_name = full_filename;
 			u$file_base = split_string1(full_filename, /\./)[0];
-			u$file_ext = split_string1(full_filename, /\./)[1];
+			u$file_ext = split_string1(full_filename, /\./)[1];
 			}
 		else
 			{

@@ -122,7 +122,9 @@ function decompose_uri(s: string): URI
 		u$portnum = to_count(parts[1]);
 		}
 	else
+		{
 		u$netlocation = s;
+		}
 
 	return u;
 	}
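The query-parameter step of `decompose_uri` can be sketched outside of Bro script as a plain key/value split. This is a hedged illustration only (the function name and types here are hypothetical, not part of Bro), mirroring the split-on-`&` then split-on-`=` logic above.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of the query-parsing step in decompose_uri:
// split "k1=v1&k2=v2" into a key -> value table.
std::map<std::string, std::string> parse_query(const std::string& query) {
    std::map<std::string, std::string> params;
    size_t pos = 0;
    while ( pos < query.size() ) {
        size_t amp = query.find('&', pos);
        if ( amp == std::string::npos )
            amp = query.size();
        std::string pair = query.substr(pos, amp - pos);
        size_t eq = pair.find('=');
        if ( eq != std::string::npos )  // skip fragments without '='
            params[pair.substr(0, eq)] = pair.substr(eq + 1);
        pos = amp + 1;
    }
    return params;
}
```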
@@ -1219,6 +1219,9 @@ void DNS_Mgr::IssueAsyncRequests()
 void DNS_Mgr::GetFds(iosource::FD_Set* read, iosource::FD_Set* write,
                      iosource::FD_Set* except)
 	{
+	if ( ! nb_dns )
+		return;
+
 	read->Insert(nb_dns_fd(nb_dns));
 	}

@@ -1358,6 +1361,9 @@ void DNS_Mgr::Process()
 
 void DNS_Mgr::DoProcess(bool flush)
 	{
+	if ( ! nb_dns )
+		return;
+
 	while ( asyncs_timeouts.size() > 0 )
 		{
 		AsyncRequest* req = asyncs_timeouts.top();

@@ -1422,6 +1428,9 @@ void DNS_Mgr::DoProcess(bool flush)
 
 int DNS_Mgr::AnswerAvailable(int timeout)
 	{
+	if ( ! nb_dns )
+		return -1;
+
 	int fd = nb_dns_fd(nb_dns);
 	if ( fd < 0 )
 		{
|
@ -49,6 +49,7 @@ double tcp_partial_close_delay;
|
|||
int tcp_max_initial_window;
|
||||
int tcp_max_above_hole_without_any_acks;
|
||||
int tcp_excessive_data_without_further_acks;
|
||||
int tcp_max_old_segments;
|
||||
|
||||
RecordType* socks_address;
|
||||
|
||||
|
@ -354,6 +355,7 @@ void init_net_var()
|
|||
opt_internal_int("tcp_max_above_hole_without_any_acks");
|
||||
tcp_excessive_data_without_further_acks =
|
||||
opt_internal_int("tcp_excessive_data_without_further_acks");
|
||||
tcp_max_old_segments = opt_internal_int("tcp_max_old_segments");
|
||||
|
||||
socks_address = internal_type("SOCKS::Address")->AsRecordType();
|
||||
|
||||
|
|
|
@ -52,6 +52,7 @@ extern double tcp_reset_delay;
|
|||
extern int tcp_max_initial_window;
|
||||
extern int tcp_max_above_hole_without_any_acks;
|
||||
extern int tcp_excessive_data_without_further_acks;
|
||||
extern int tcp_max_old_segments;
|
||||
|
||||
extern RecordType* socks_address;
|
||||
|
||||
|
|
|
@@ -34,12 +34,52 @@ uint64 Reassembler::total_size = 0;
 Reassembler::Reassembler(uint64 init_seq)
 	{
 	blocks = last_block = 0;
+	old_blocks = last_old_block = 0;
+	total_old_blocks = max_old_blocks = 0;
 	trim_seq = last_reassem_seq = init_seq;
 	}
 
 Reassembler::~Reassembler()
 	{
 	ClearBlocks();
+	ClearOldBlocks();
 	}
 
+void Reassembler::CheckOverlap(DataBlock *head, DataBlock *tail,
+				uint64 seq, uint64 len, const u_char* data)
+	{
+	if ( ! head || ! tail )
+		return;
+
+	uint64 upper = (seq + len);
+
+	for ( DataBlock* b = head; b; b = b->next )
+		{
+		uint64 nseq = seq;
+		uint64 nupper = upper;
+		const u_char* ndata = data;
+
+		if ( nupper <= b->seq )
+			continue;
+
+		if ( nseq >= b->upper )
+			continue;
+
+		if ( nseq < b->seq )
+			{
+			ndata += (b->seq - seq);
+			nseq = b->seq;
+			}
+
+		if ( nupper > b->upper )
+			nupper = b->upper;
+
+		uint64 overlap_offset = (nseq - b->seq);
+		uint64 overlap_len = (nupper - nseq);
+
+		if ( overlap_len )
+			Overlap(&b->block[overlap_offset], ndata, overlap_len);
+		}
+	}
+
 void Reassembler::NewBlock(double t, uint64 seq, uint64 len, const u_char* data)
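The interval arithmetic at the heart of `CheckOverlap` can be shown in isolation. The sketch below is a hedged restatement, not the Reassembler itself: clip a new segment `[seq, seq+len)` against an existing block `[bseq, bupper)` and report the overlapping range (length 0 when disjoint), exactly the clipping the loop above performs per block.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the per-block clipping done by CheckOverlap.
struct OverlapRange { uint64_t start; uint64_t len; };

OverlapRange clip(uint64_t seq, uint64_t len, uint64_t bseq, uint64_t bupper) {
    uint64_t nseq = std::max(seq, bseq);            // advance past block start
    uint64_t nupper = std::min(seq + len, bupper);  // stop at block end
    if ( nupper <= nseq )
        return {0, 0};                              // disjoint: no overlap
    return {nseq, nupper - nseq};
}
```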
@@ -49,10 +89,14 @@ void Reassembler::NewBlock(double t, uint64 seq, uint64 len, const u_char* data)
 
 	uint64 upper_seq = seq + len;
 
+	CheckOverlap(old_blocks, last_old_block, seq, len, data);
+
 	if ( upper_seq <= trim_seq )
 		// Old data, don't do any work for it.
 		return;
 
+	CheckOverlap(blocks, last_block, seq, len, data);
+
 	if ( seq < trim_seq )
 		{ // Partially old data, just keep the good stuff.
 		uint64 amount_old = trim_seq - seq;
@@ -119,7 +163,36 @@ uint64 Reassembler::TrimToSeq(uint64 seq)
 				num_missing += seq - blocks->upper;
 			}
 
-		delete blocks;
+		if ( max_old_blocks )
+			{
+			// Move block over to old_blocks queue.
+			blocks->next = 0;
+
+			if ( last_old_block )
+				{
+				blocks->prev = last_old_block;
+				last_old_block->next = blocks;
+				}
+			else
+				{
+				blocks->prev = 0;
+				old_blocks = blocks;
+				}
+
+			last_old_block = blocks;
+			total_old_blocks++;
+
+			while ( old_blocks && total_old_blocks > max_old_blocks )
+				{
+				DataBlock* next = old_blocks->next;
+				delete old_blocks;
+				old_blocks = next;
+				total_old_blocks--;
+				}
+			}
+
+		else
+			delete blocks;
 
 		blocks = b;
 		}
@@ -156,6 +229,18 @@ void Reassembler::ClearBlocks()
 	last_block = 0;
 	}
 
+void Reassembler::ClearOldBlocks()
+	{
+	while ( old_blocks )
+		{
+		DataBlock* b = old_blocks->next;
+		delete old_blocks;
+		old_blocks = b;
+		}
+
+	last_old_block = 0;
+	}
+
 uint64 Reassembler::TotalSize() const
 	{
 	uint64 size = 0;
@@ -218,7 +303,7 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 		return new_b;
 		}
 
-	// The blocks overlap, complain.
+	// The blocks overlap.
 	if ( seq < b->seq )
 		{
 		// The new block has a prefix that comes before b.
@@ -239,8 +324,6 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 	uint64 b_len = b->upper - overlap_start;
 	uint64 overlap_len = min(new_b_len, b_len);
 
-	Overlap(&b->block[overlap_offset], data, overlap_len);
-
 	if ( overlap_len < new_b_len )
 		{
 		// Recurse to resolve remainder of the new data.
@@ -36,6 +36,7 @@ public:
 
 	// Delete all held blocks.
 	void ClearBlocks();
+	void ClearOldBlocks();
 
 	int HasBlocks() const		{ return blocks != 0; }
 	uint64 LastReassemSeq() const	{ return last_reassem_seq; }

@@ -50,6 +51,8 @@ public:
 	// Sum over all data buffered in some reassembler.
 	static uint64 TotalMemoryAllocation()	{ return total_size; }
 
+	void SetMaxOldBlocks(uint32 count)	{ max_old_blocks = count; }
+
 protected:
 	Reassembler()	{ }

@@ -65,10 +68,19 @@ protected:
 	DataBlock* AddAndCheck(DataBlock* b, uint64 seq,
 				uint64 upper, const u_char* data);
 
+	void CheckOverlap(DataBlock *head, DataBlock *tail,
+				uint64 seq, uint64 len, const u_char* data);
+
 	DataBlock* blocks;
 	DataBlock* last_block;
 
+	DataBlock* old_blocks;
+	DataBlock* last_old_block;
+
 	uint64 last_reassem_seq;
 	uint64 trim_seq;	// how far we've trimmed
+	uint32 max_old_blocks;
+	uint32 total_old_blocks;
 
 	static uint64 total_size;
 };
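The `TrimToSeq` change above retires trimmed blocks into a bounded old-blocks queue instead of deleting them. A hedged sketch of that eviction policy, using `std::list` instead of the intrusive `DataBlock` list (the struct and member names here are hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <string>
#include <utility>

// Hypothetical sketch of the bounded old-blocks queue: retired blocks are
// appended, and the oldest are evicted once the cap is exceeded.
struct OldBlocks {
    uint32_t max_old_blocks = 0;  // 0 keeps the pre-change behavior: drop
    std::list<std::string> blocks;

    void retire(std::string block) {
        if ( max_old_blocks == 0 )
            return;  // nothing kept; the block is simply dropped
        blocks.push_back(std::move(block));
        while ( blocks.size() > max_old_blocks )
            blocks.pop_front();  // evict the oldest first
    }
};
```

Capping the queue keeps the memory cost of the new `rexmit_inconsistency` detection bounded per reassembler.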
@@ -1025,8 +1025,11 @@ void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
 			}
 		else
 			{
-			ProtocolViolation("not a http reply line");
-			reply_state = EXPECT_REPLY_NOTHING;
+			if ( line != end_of_line )
+				{
+				ProtocolViolation("not a http reply line");
+				reply_state = EXPECT_REPLY_NOTHING;
+				}
 			}
 
 		break;
@@ -141,10 +141,9 @@ int fputs(data_chunk_t b, FILE* fp)
 
 void MIME_Mail::Undelivered(int len)
 	{
-	// is_orig param not available, doesn't matter as long as it's consistent
 	cur_entity_id = file_mgr->Gap(cur_entity_len, len,
 	                              analyzer->GetAnalyzerTag(), analyzer->Conn(),
-	                              false, cur_entity_id);
+	                              is_orig, cur_entity_id);
 	}
 
 int strcasecmp_n(data_chunk_t s, const char* t)
@@ -246,11 +245,16 @@ int MIME_get_field_name(int len, const char* data, data_chunk_t* name)
 	}
 
 // See RFC 2045, page 12.
-int MIME_is_tspecial (char ch)
+int MIME_is_tspecial (char ch, bool is_boundary = false)
 	{
-	return ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' ||
-	       ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
-	       ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=';
+	if ( is_boundary )
+		return ch == '(' || ch == ')' || ch == '@' ||
+		       ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
+		       ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=';
+	else
+		return ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' ||
+		       ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
+		       ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=';
 	}
 
 int MIME_is_field_name_char (char ch)
@@ -258,26 +262,27 @@ int MIME_is_field_name_char (char ch)
 	return ch >= 33 && ch <= 126 && ch != ':';
 	}
 
-int MIME_is_token_char (char ch)
+int MIME_is_token_char (char ch, bool is_boundary = false)
 	{
-	return ch >= 33 && ch <= 126 && ! MIME_is_tspecial(ch);
+	return ch >= 33 && ch <= 126 && ! MIME_is_tspecial(ch, is_boundary);
 	}
 
 // See RFC 2045, page 12.
 // A token is composed of characters that are not SPACE, CTLs or tspecials
-int MIME_get_token(int len, const char* data, data_chunk_t* token)
+int MIME_get_token(int len, const char* data, data_chunk_t* token,
+                   bool is_boundary)
 	{
 	int i = MIME_skip_lws_comments(len, data);
 	while ( i < len )
 		{
 		int j;
 
-		if ( MIME_is_token_char(data[i]) )
+		if ( MIME_is_token_char(data[i], is_boundary) )
 			{
 			token->data = (data + i);
 			for ( j = i; j < len; ++j )
 				{
-				if ( ! MIME_is_token_char(data[j]) )
+				if ( ! MIME_is_token_char(data[j], is_boundary) )
 					break;
 				}
 
@@ -359,7 +364,7 @@ int MIME_get_quoted_string(int len, const char* data, data_chunk_t* str)
 	return -1;
 	}
 
-int MIME_get_value(int len, const char* data, BroString*& buf)
+int MIME_get_value(int len, const char* data, BroString*& buf, bool is_boundary)
 	{
 	int offset = MIME_skip_lws_comments(len, data);
 

@@ -380,7 +385,7 @@ int MIME_get_value(int len, const char* data, BroString*& buf)
 	else
 		{
 		data_chunk_t str;
-		int end = MIME_get_token(len, data, &str);
+		int end = MIME_get_token(len, data, &str, is_boundary);
 		if ( end < 0 )
 			return -1;
 
@@ -863,8 +868,22 @@ int MIME_Entity::ParseFieldParameters(int len, const char* data)
 		len -= offset;
 
 		BroString* val = 0;
-		// token or quoted-string
-		offset = MIME_get_value(len, data, val);
+
+		if ( current_field_type == MIME_CONTENT_TYPE &&
+		     content_type == CONTENT_TYPE_MULTIPART &&
+		     strcasecmp_n(attr, "boundary") == 0 )
+			{
+			// token or quoted-string (and some lenience for characters
+			// not explicitly allowed by the RFC, but encountered in the wild)
+			offset = MIME_get_value(len, data, val, true);
+			data_chunk_t vd = get_data_chunk(val);
+			multipart_boundary = new BroString((const u_char*)vd.data,
+			                                   vd.length, 1);
+			}
+		else
+			// token or quoted-string
+			offset = MIME_get_value(len, data, val);
 
 		if ( offset < 0 )
 			{
 			IllegalFormat("value not found in parameter specification");

@@ -874,8 +893,6 @@ int MIME_Entity::ParseFieldParameters(int len, const char* data)
 
 		data += offset;
 		len -= offset;
-
-		ParseParameter(attr, get_data_chunk(val));
 		delete val;
 		}
@@ -920,24 +937,6 @@ void MIME_Entity::ParseContentEncoding(data_chunk_t encoding_mechanism)
 	content_encoding = i;
 	}
 
-void MIME_Entity::ParseParameter(data_chunk_t attr, data_chunk_t val)
-	{
-	switch ( current_field_type ) {
-	case MIME_CONTENT_TYPE:
-		if ( content_type == CONTENT_TYPE_MULTIPART &&
-		     strcasecmp_n(attr, "boundary") == 0 )
-			multipart_boundary = new BroString((const u_char*)val.data, val.length, 1);
-		break;
-
-	case MIME_CONTENT_TRANSFER_ENCODING:
-		break;
-
-	default:
-		break;
-	}
-	}
-
-
 int MIME_Entity::CheckBoundaryDelimiter(int len, const char* data)
 	{
 	if ( ! multipart_boundary )
@@ -1286,13 +1285,15 @@ TableVal* MIME_Message::BuildHeaderTable(MIME_HeaderList& hlist)
 	return t;
 	}
 
-MIME_Mail::MIME_Mail(analyzer::Analyzer* mail_analyzer, int buf_size)
+MIME_Mail::MIME_Mail(analyzer::Analyzer* mail_analyzer, bool orig, int buf_size)
 	: MIME_Message(mail_analyzer), md5_hash()
 	{
 	analyzer = mail_analyzer;
 
 	min_overlap_length = mime_segment_overlap_length;
 	max_chunk_length = mime_segment_length;
+	is_orig = orig;
 
 	int length = buf_size;
 
 	if ( min_overlap_length < 0 )
@@ -1456,9 +1457,8 @@ void MIME_Mail::SubmitData(int len, const char* buf)
 		analyzer->ConnectionEvent(mime_segment_data, vl);
 		}
 
-	// is_orig param not available, doesn't matter as long as it's consistent
 	cur_entity_id = file_mgr->DataIn(reinterpret_cast<const u_char*>(buf), len,
-	                                 analyzer->GetAnalyzerTag(), analyzer->Conn(), false,
+	                                 analyzer->GetAnalyzerTag(), analyzer->Conn(), is_orig,
 	                                 cur_entity_id);
 
 	cur_entity_len += len;
@@ -117,7 +117,6 @@ protected:
 
 	void ParseContentType(data_chunk_t type, data_chunk_t sub_type);
 	void ParseContentEncoding(data_chunk_t encoding_mechanism);
-	void ParseParameter(data_chunk_t attr, data_chunk_t val);
 
 	void BeginBody();
 	void NewDataLine(int len, const char* data, int trailing_CRLF);

@@ -231,7 +230,7 @@ protected:
 
 class MIME_Mail : public MIME_Message {
 public:
-	MIME_Mail(analyzer::Analyzer* mail_conn, int buf_size = 0);
+	MIME_Mail(analyzer::Analyzer* mail_conn, bool is_orig, int buf_size = 0);
 	~MIME_Mail();
 	void Done();
 

@@ -248,6 +247,7 @@ public:
 protected:
 	int min_overlap_length;
 	int max_chunk_length;
	bool is_orig;
 	int buffer_start;
 	int data_start;
 	int compute_content_hash;

@@ -275,9 +275,11 @@ extern int MIME_count_leading_lws(int len, const char* data);
 extern int MIME_count_trailing_lws(int len, const char* data);
 extern int MIME_skip_comments(int len, const char* data);
 extern int MIME_skip_lws_comments(int len, const char* data);
-extern int MIME_get_token(int len, const char* data, data_chunk_t* token);
+extern int MIME_get_token(int len, const char* data, data_chunk_t* token,
+                          bool is_boundary = false);
 extern int MIME_get_slash_token_pair(int len, const char* data, data_chunk_t* first, data_chunk_t* second);
-extern int MIME_get_value(int len, const char* data, BroString*& buf);
+extern int MIME_get_value(int len, const char* data, BroString*& buf,
+                          bool is_boundary = false);
 extern int MIME_get_field_name(int len, const char* data, data_chunk_t* name);
 extern BroString* MIME_decode_quoted_pairs(data_chunk_t buf);
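The effect of the `MIME_is_tspecial` change for BIT-1400 can be restated in a hedged, standalone form: in boundary mode, `<` and `>` stop being treated as specials, so boundary strings seen in the wild that contain them still parse as a single token. The function below is an illustration, not Bro's actual code.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical restatement of the tspecial check with boundary lenience:
// '<' and '>' remain special only outside multipart-boundary parsing.
bool is_tspecial(char ch, bool is_boundary) {
    static const char* common = "()@,;:\\\"/[]?=";  // special in both modes
    if ( ch != '\0' && std::strchr(common, ch) != nullptr )
        return true;
    if ( ! is_boundary && (ch == '<' || ch == '>') )
        return true;  // still special for ordinary tokens
    return false;
}
```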
@@ -47,6 +47,42 @@
 
 %}
 
+refine connection ModbusTCP_Conn += {
+	%member{
+		// Fields used to determine if the protocol has been confirmed or not.
+		bool confirmed;
+		bool orig_pdu;
+		bool resp_pdu;
+	%}
+
+	%init{
+		confirmed = false;
+		orig_pdu = false;
+		resp_pdu = false;
+	%}
+
+	function SetPDU(is_orig: bool): bool
+		%{
+		if ( is_orig )
+			orig_pdu = true;
+		else
+			resp_pdu = true;
+
+		return true;
+		%}
+
+	function SetConfirmed(): bool
+		%{
+		confirmed = true;
+		return true;
+		%}
+
+	function IsConfirmed(): bool
+		%{
+		return confirmed && orig_pdu && resp_pdu;
+		%}
+};
+
 refine flow ModbusTCP_Flow += {
 
 	function deliver_message(header: ModbusTCP_TransportHeader): bool

@@ -62,6 +98,21 @@ refine flow ModbusTCP_Flow += {
 		return true;
 		%}
 
+	function deliver_ModbusTCP_PDU(message: ModbusTCP_PDU): bool
+		%{
+		// We will assume that if an entire PDU from both sides
+		// is successfully parsed then this is definitely modbus.
+		connection()->SetPDU(${message.is_orig});
+
+		if ( ! connection()->IsConfirmed() )
+			{
+			connection()->SetConfirmed();
+			connection()->bro_analyzer()->ProtocolConfirmation();
+			}
+
+		return true;
+		%}
+
 	# EXCEPTION
 	function deliver_Exception(header: ModbusTCP_TransportHeader, message: Exception): bool
 		%{
@@ -64,6 +64,8 @@ type ModbusTCP_PDU(is_orig: bool) = record {
 		true  -> request:  ModbusTCP_Request(header);
 		false -> response: ModbusTCP_Response(header);
 	};
+} &let {
+	deliver: bool = $context.flow.deliver_ModbusTCP_PDU(this);
 } &length=header.len+6, &byteorder=bigendian;
 
 type ModbusTCP_TransportHeader = record {
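The confirmation condition introduced for Modbus above (`confirmed && orig_pdu && resp_pdu`) boils down to a tiny two-flag state machine: the protocol is only reported once a full PDU has parsed in each direction. A hedged sketch (struct and method names hypothetical, and the real code additionally latches a `confirmed` flag so `ProtocolConfirmation()` fires only once):

```cpp
#include <cassert>

// Hypothetical sketch of the Modbus confirmation logic: require a parsed
// PDU from both originator and responder before confirming the protocol.
struct ModbusConfirm {
    bool orig_pdu = false;
    bool resp_pdu = false;

    void saw_pdu(bool is_orig) {
        if ( is_orig )
            orig_pdu = true;   // originator side parsed a full PDU
        else
            resp_pdu = true;   // responder side parsed a full PDU
    }

    bool confirmed() const { return orig_pdu && resp_pdu; }
};
```

Requiring both directions avoids confirming the service in conn.log on one-sided or coincidental traffic.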
@@ -712,7 +712,8 @@ void POP3_Analyzer::ProcessReply(int length, const char* line)
 	{
 	int data_len = end_of_line - line;
 	if ( ! mail )
-		BeginData();
+		// ProcessReply is only called if orig == false
+		BeginData(false);
 	ProcessData(data_len, line);
 	if ( requestForMultiLine == true )
 		multiLine = true;

@@ -838,10 +839,10 @@ void POP3_Analyzer::AuthSuccessfull()
 		user.c_str(), password.c_str());
 	}
 
-void POP3_Analyzer::BeginData()
+void POP3_Analyzer::BeginData(bool orig)
 	{
 	delete mail;
-	mail = new mime::MIME_Mail(this);
+	mail = new mime::MIME_Mail(this, orig);
 	}
 
 void POP3_Analyzer::EndData()
@@ -95,7 +95,7 @@ protected:
 	void NotAllowed(const char* cmd, const char* state);
 	void ProcessClientCmd();
 	void FinishClientCmd();
-	void BeginData();
+	void BeginData(bool orig);
 	void ProcessData(int length, const char* line);
 	void EndData();
 	void StartTLS();
@@ -45,7 +45,7 @@ SMTP_Analyzer::SMTP_Analyzer(Connection* conn)
orig_is_sender = true;
line_after_gap = 0;
mail = 0;
UpdateState(first_cmd, 0);
UpdateState(first_cmd, 0, true);
cl_orig = new tcp::ContentLine_Analyzer(conn, true);
cl_orig->SetIsNULSensitive(true);
cl_orig->SetSkipPartial(true);

@@ -214,7 +214,7 @@ void SMTP_Analyzer::ProcessLine(int length, const char* line, bool orig)
// but are now processing packets sent
// afterwards (because, e.g., the RST was
// dropped or ignored).
BeginData();
BeginData(orig);

ProcessData(data_len, line);

@@ -264,7 +264,7 @@ void SMTP_Analyzer::ProcessLine(int length, const char* line, bool orig)
// RequestEvent() in different orders for the
// two commands.
if ( cmd_code == SMTP_CMD_END_OF_DATA )
UpdateState(cmd_code, 0);
UpdateState(cmd_code, 0, orig);

if ( smtp_request )
{

@@ -273,7 +273,7 @@ void SMTP_Analyzer::ProcessLine(int length, const char* line, bool orig)
}

if ( cmd_code != SMTP_CMD_END_OF_DATA )
UpdateState(cmd_code, 0);
UpdateState(cmd_code, 0, orig);
}
}

@@ -312,7 +312,7 @@ void SMTP_Analyzer::ProcessLine(int length, const char* line, bool orig)

if ( ! pending_reply && reply_code >= 0 )
// It is not a continuation.
NewReply(reply_code);
NewReply(reply_code, orig);

// Update pending_reply.
if ( reply_code >= 0 && length > 3 && line[3] == '-' )

@@ -419,7 +419,7 @@ void SMTP_Analyzer::StartTLS()
// we want to understand the behavior of SMTP and check how far it may
// deviate from our knowledge.

void SMTP_Analyzer::NewReply(const int reply_code)
void SMTP_Analyzer::NewReply(const int reply_code, bool orig)
{
if ( state == SMTP_AFTER_GAP && reply_code > 0 )
{

@@ -447,7 +447,7 @@ void SMTP_Analyzer::NewReply(const int reply_code)
pending_cmd_q.pop_front();
}

UpdateState(cmd_code, reply_code);
UpdateState(cmd_code, reply_code, orig);
}

// Note: reply_code == 0 means we haven't seen the reply, in which case we

@@ -457,7 +457,7 @@ void SMTP_Analyzer::NewReply(const int reply_code)
// in the RPC), and as a result we have to update the state following
// the commands in addition to the replies.

void SMTP_Analyzer::UpdateState(const int cmd_code, const int reply_code)
void SMTP_Analyzer::UpdateState(const int cmd_code, const int reply_code, bool orig)
{
const int st = state;

@@ -588,7 +588,7 @@ void SMTP_Analyzer::UpdateState(const int cmd_code, const int reply_code)
case 0:
if ( state != SMTP_RCPT_OK )
UnexpectedCommand(cmd_code, reply_code);
BeginData();
BeginData(orig);
break;

case 354:

@@ -786,7 +786,7 @@ void SMTP_Analyzer::UpdateState(const int cmd_code, const int reply_code)
default:
if ( st == SMTP_GAP_RECOVERY && reply_code == 354 )
{
BeginData();
BeginData(orig);
}
break;
}

@@ -890,7 +890,7 @@ void SMTP_Analyzer::ProcessData(int length, const char* line)
mail->Deliver(length, line, 1 /* trailing_CRLF */);
}

void SMTP_Analyzer::BeginData()
void SMTP_Analyzer::BeginData(bool orig)
{
state = SMTP_IN_DATA;
skip_data = 0; // reset the flag at the beginning of the mail

@@ -901,7 +901,7 @@ void SMTP_Analyzer::BeginData()
delete mail;
}

mail = new mime::MIME_Mail(this);
mail = new mime::MIME_Mail(this, orig);
}

void SMTP_Analyzer::EndData()

@@ -58,13 +58,13 @@ protected:

void ProcessLine(int length, const char* line, bool orig);
void NewCmd(const int cmd_code);
void NewReply(const int reply_code);
void NewReply(const int reply_code, bool orig);
void ProcessExtension(int ext_len, const char* ext);
void ProcessData(int length, const char* line);

void UpdateState(const int cmd_code, const int reply_code);
void UpdateState(const int cmd_code, const int reply_code, bool orig);

void BeginData();
void BeginData(bool orig);
void EndData();

int ParseCmd(int cmd_len, const char* cmd);

@@ -201,13 +201,18 @@ int TCP_Endpoint::DataSent(double t, uint64 seq, int len, int caplen,
{
int status = 0;

if ( contents_processor && caplen >= len )
status = contents_processor->DataSent(t, seq, len, data);
if ( contents_processor )
{
if ( caplen >= len )
status = contents_processor->DataSent(t, seq, len, data);
else
TCP()->Weird("truncated_tcp_payload");
}

if ( caplen <= 0 )
return status;

if ( contents_file && ! contents_processor &&
if ( contents_file && ! contents_processor &&
seq + len > contents_start_seq )
{
int64 under_seq = contents_start_seq - seq;

@@ -42,6 +42,9 @@ TCP_Reassembler::TCP_Reassembler(analyzer::Analyzer* arg_dst_analyzer,
seq_to_skip = 0;
in_delivery = false;

if ( tcp_max_old_segments )
SetMaxOldBlocks(tcp_max_old_segments);

if ( tcp_contents )
{
// Val dst_port_val(ntohs(Conn()->RespPort()), TYPE_PORT);

@@ -22,10 +22,9 @@ ZIP_Analyzer::ZIP_Analyzer(Connection* conn, bool orig, Method arg_method)
zip->next_in = 0;
zip->avail_in = 0;

// "15" here means maximum compression. "32" is a gross overload
// hack that means "check it for whether it's a gzip file". Sheesh.
zip_status = inflateInit2(zip, 15 + 32);
if ( zip_status != Z_OK )
// "32" is a gross overload hack that means "check it
// for whether it's a gzip file". Sheesh.
if ( inflateInit2(zip, MAX_WBITS + 32) != Z_OK )
{
Weird("inflate_init_failed");
delete zip;

@@ -56,38 +55,63 @@ void ZIP_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
static unsigned int unzip_size = 4096;
Bytef unzipbuf[unzip_size];

int allow_restart = 1;

zip->next_in = (Bytef*) data;
zip->avail_in = len;

do
Bytef *orig_next_in = zip->next_in;
size_t orig_avail_in = zip->avail_in;

while ( true )
{
zip->next_out = unzipbuf;
zip->avail_out = unzip_size;

zip_status = inflate(zip, Z_SYNC_FLUSH);

if ( zip_status != Z_STREAM_END &&
zip_status != Z_OK &&
zip_status != Z_BUF_ERROR )
if ( zip_status == Z_STREAM_END ||
zip_status == Z_OK )
{
allow_restart = 0;

int have = unzip_size - zip->avail_out;
if ( have )
ForwardStream(have, unzipbuf, IsOrig());

if ( zip_status == Z_STREAM_END )
{
inflateEnd(zip);
return;
}

if ( zip->avail_in == 0 )
return;

}

else if ( allow_restart && zip_status == Z_DATA_ERROR )
{
// Some servers seem to not generate zlib headers,
// so this is an attempt to fix and continue anyway.
inflateEnd(zip);

if ( inflateInit2(zip, -MAX_WBITS) != Z_OK )
{
Weird("inflate_init_failed");
return;
}

zip->next_in = orig_next_in;
zip->avail_in = orig_avail_in;
allow_restart = 0;
continue;
}

else
{
Weird("inflate_failed");
inflateEnd(zip);
break;
return;
}

int have = unzip_size - zip->avail_out;
if ( have )
ForwardStream(have, unzipbuf, IsOrig());

if ( zip_status == Z_STREAM_END )
{
inflateEnd(zip);
delete zip;
zip = 0;
break;
}

zip_status = Z_OK;
}
while ( zip->avail_out == 0 );
}

@@ -24,6 +24,12 @@ int bro_broker::Manager::send_flags_self_idx;
int bro_broker::Manager::send_flags_peers_idx;
int bro_broker::Manager::send_flags_unsolicited_idx;

bro_broker::Manager::Manager()
	: iosource::IOSource(), next_timestamp(-1)
{
SetIdle(true);
}

bro_broker::Manager::~Manager()
{
vector<decltype(data_stores)::key_type> stores_to_close;

@@ -560,8 +566,10 @@ void bro_broker::Manager::GetFds(iosource::FD_Set* read, iosource::FD_Set* write

double bro_broker::Manager::NextTimestamp(double* local_network_time)
{
// TODO: do something better?
return timer_mgr->Time();
if ( next_timestamp < 0 )
next_timestamp = timer_mgr->Time();

return next_timestamp;
}

struct response_converter {

@@ -619,7 +627,6 @@ static RecordVal* response_to_val(broker::store::response r)

void bro_broker::Manager::Process()
{
bool idle = true;
auto outgoing_connection_updates =
endpoint->outgoing_connection_status().want_pop();
auto incoming_connection_updates =

@@ -630,8 +637,6 @@ void bro_broker::Manager::Process()

for ( auto& u : outgoing_connection_updates )
{
idle = false;

switch ( u.status ) {
case broker::outgoing_connection_status::tag::established:
if ( BrokerComm::outgoing_connection_established )

@@ -677,8 +682,6 @@ void bro_broker::Manager::Process()

for ( auto& u : incoming_connection_updates )
{
idle = false;

switch ( u.status ) {
case broker::incoming_connection_status::tag::established:
if ( BrokerComm::incoming_connection_established )

@@ -714,7 +717,6 @@ void bro_broker::Manager::Process()
continue;

ps.second.received += print_messages.size();
idle = false;

if ( ! BrokerComm::print_handler )
continue;

@@ -751,7 +753,6 @@ void bro_broker::Manager::Process()
continue;

es.second.received += event_messages.size();
idle = false;

for ( auto& em : event_messages )
{

@@ -822,7 +823,6 @@ void bro_broker::Manager::Process()
continue;

ls.second.received += log_messages.size();
idle = false;

for ( auto& lm : log_messages )
{

@@ -890,7 +890,6 @@ void bro_broker::Manager::Process()
continue;

statistics.report_count += responses.size();
idle = false;

for ( auto& response : responses )
{

@@ -940,8 +939,6 @@ void bro_broker::Manager::Process()

for ( auto& report : reports )
{
idle = false;

if ( report.size() < 2 )
{
reporter->Warning("got broker report msg of size %zu, expect 4",

@@ -979,7 +976,7 @@ void bro_broker::Manager::Process()
}
}

SetIdle(idle);
next_timestamp = -1;
}

bool bro_broker::Manager::AddStore(StoreHandleVal* handle)

@@ -50,6 +50,11 @@ class Manager : public iosource::IOSource {
friend class StoreHandleVal;
public:

/**
 * Constructor.
 */
Manager();

/**
 * Destructor. Any still-pending data store queries are aborted.
 */

@@ -351,6 +356,7 @@ private:
std::unordered_set<StoreQueryCallback*> pending_queries;

Stats statistics;
double next_timestamp;

static VectorType* vector_of_data_type;
static EnumType* log_id_type;

@@ -282,7 +282,8 @@ event packet_contents%(c: connection, contents: string%);
## reassembling a TCP stream, Bro buffers all payload until it sees the
## responder acking it. If during that time, the sender resends a chunk of
## payload but with different content than originally, this event will be
## raised.
## raised. In addition, if :bro:id:`tcp_max_old_segments` is larger than zero,
## mismatches with that older still-buffered data will likewise trigger the event.
##
## c: The connection showing the inconsistency.
##

@@ -5,14 +5,14 @@
%}

%header{
VectorVal* process_rvas(const RVAS* rvas, const uint16 size);
VectorVal* process_rvas(const RVAS* rvas);
%}

%code{
VectorVal* process_rvas(const RVAS* rva_table, const uint16 size)
VectorVal* process_rvas(const RVAS* rva_table)
{
VectorVal* rvas = new VectorVal(internal_type("index_vec")->AsVectorType());
for ( uint16 i=0; i < size; ++i )
for ( uint16 i=0; i < rva_table->rvas()->size(); ++i )
rvas->Assign(i, new Val((*rva_table->rvas())[i]->size(), TYPE_COUNT));

return rvas;

@@ -149,7 +149,7 @@ refine flow File += {
oh->Assign(21, new Val(${h.subsystem}, TYPE_COUNT));
oh->Assign(22, characteristics_to_bro(${h.dll_characteristics}, 16));

oh->Assign(23, process_rvas(${h.rvas}, ${h.number_of_rva_and_sizes}));
oh->Assign(23, process_rvas(${h.rvas}));

BifEvent::generate_pe_optional_header((analyzer::Analyzer *) connection()->bro_analyzer(),
connection()->bro_analyzer()->GetFile()->GetVal()->Ref(),

@@ -1,7 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright.

#ifndef INPUT_READERS_POSTGRES_H
#define INPUT_READERS_POSTGRES_H
#ifndef INPUT_READERS_SQLITE_H
#define INPUT_READERS_SQLITE_H

#include "config.h"

@@ -50,5 +50,5 @@ private:
}
}

#endif /* INPUT_READERS_POSTGRES_H */
#endif /* INPUT_READERS_SQLITE_H */

@@ -118,9 +118,6 @@ IOSource* Manager::FindSoonest(double* ts)

src->Clear();
src->src->GetFds(&src->fd_read, &src->fd_write, &src->fd_except);
if ( src->fd_read.Empty() ) src->fd_read.Insert(0);
if ( src->fd_write.Empty() ) src->fd_write.Insert(0);
if ( src->fd_except.Empty() ) src->fd_except.Insert(0);
src->SetFds(&fd_read, &fd_write, &fd_except, &maxx);
}

@@ -240,6 +240,18 @@ void PktSrc::GetFds(iosource::FD_Set* read, iosource::FD_Set* write,

if ( IsOpen() && props.selectable_fd >= 0 )
read->Insert(props.selectable_fd);

// TODO: This seems like a hack that should be removed, but doing so
// causes the main run loop to spin more frequently and increase cpu usage.
// See also commit 9cd85be308.
if ( read->Empty() )
read->Insert(0);

if ( write->Empty() )
write->Insert(0);

if ( except->Empty() )
except->Insert(0);
}

double PktSrc::NextTimestamp(double* local_network_time)

45
src/main.cc
@@ -179,8 +179,8 @@ void usage()
fprintf(stderr, " -r|--readfile <readfile> | read from given tcpdump file\n");
fprintf(stderr, " -s|--rulefile <rulefile> | read rules from given file\n");
fprintf(stderr, " -t|--tracefile <tracefile> | activate execution tracing\n");
fprintf(stderr, " -w|--writefile <writefile> | write to given tcpdump file\n");
fprintf(stderr, " -v|--version | print version and exit\n");
fprintf(stderr, " -w|--writefile <writefile> | write to given tcpdump file\n");
fprintf(stderr, " -x|--print-state <file.bst> | print contents of state file\n");
fprintf(stderr, " -z|--analyze <analysis> | run the specified policy file analysis\n");
#ifdef DEBUG

@@ -188,6 +188,8 @@ void usage()
#endif
fprintf(stderr, " -C|--no-checksums | ignore checksums\n");
fprintf(stderr, " -F|--force-dns | force DNS\n");
fprintf(stderr, " -G|--load-seeds <file> | load seeds from given file\n");
fprintf(stderr, " -H|--save-seeds <file> | save seeds to given file\n");
fprintf(stderr, " -I|--print-id <ID name> | print out given ID\n");
fprintf(stderr, " -J|--set-seed <seed> | set the random number seed\n");
fprintf(stderr, " -K|--md5-hashkey <hashkey> | set key for MD5-keyed hashing\n");

@@ -209,8 +211,6 @@ void usage()
fprintf(stderr, " -X <file.bst> | print contents of state file as XML\n");
#endif
fprintf(stderr, " --pseudo-realtime[=<speedup>] | enable pseudo-realtime for performance evaluation (default 1)\n");
fprintf(stderr, " --load-seeds <file> | load seeds from given file\n");
fprintf(stderr, " --save-seeds <file> | save seeds to given file\n");

#ifdef USE_IDMEF
fprintf(stderr, " -n|--idmef-dtd <idmef-msg.dtd> | specify path to IDMEF DTD file\n");

@@ -547,7 +547,7 @@ int main(int argc, char** argv)
opterr = 0;

char opts[256];
safe_strncpy(opts, "B:e:f:I:i:J:K:n:p:R:r:s:T:t:U:w:x:X:z:CFNPSWabdghvQ",
safe_strncpy(opts, "B:e:f:G:H:I:i:J:K:n:p:R:r:s:T:t:U:w:x:X:z:CFNPQSWabdghv",
sizeof(opts));

#ifdef USE_PERFTOOLS_DEBUG

@@ -582,6 +582,10 @@ int main(int argc, char** argv)
dump_cfg = true;
break;

case 'h':
usage();
break;

case 'i':
interfaces.append(optarg);
break;

@@ -603,10 +607,19 @@ int main(int argc, char** argv)
g_trace_state.TraceOn();
break;

case 'v':
fprintf(stderr, "%s version %s\n", prog, bro_version());
exit(0);
break;

case 'w':
writefile = optarg;
break;

case 'x':
bst_file = optarg;
break;

case 'z':
if ( streq(optarg, "notice") )
do_notice_analysis = 1;

@@ -617,6 +630,10 @@ int main(int argc, char** argv)
}
break;

case 'B':
debug_streams = optarg;
break;

case 'C':
override_ignore_checksums = 1;
break;

@@ -688,13 +705,8 @@ int main(int argc, char** argv)
do_watchdog = 1;
break;

case 'h':
usage();
break;

case 'v':
fprintf(stderr, "%s version %s\n", prog, bro_version());
exit(0);
case 'X':
broxygen_config = optarg;
break;

#ifdef USE_PERFTOOLS_DEBUG

@@ -707,9 +719,6 @@ int main(int argc, char** argv)
break;
#endif

case 'x':
bst_file = optarg;
break;
#if 0 // broken
case 'X':
bst_file = optarg;

@@ -717,10 +726,6 @@ int main(int argc, char** argv)
break;
#endif

case 'X':
broxygen_config = optarg;
break;

#ifdef USE_IDMEF
case 'n':
fprintf(stderr, "Using IDMEF XML DTD from %s\n", optarg);

@@ -728,10 +733,6 @@ int main(int argc, char** argv)
break;
#endif

case 'B':
debug_streams = optarg;
break;

case 0:
// This happens for long options that don't have
// a short-option equivalent.

@@ -700,7 +700,7 @@ function split_n%(str: string, re: pattern,
##
## Returns: An array of strings where, if *incl_sep* is true, each two
## successive elements correspond to a substring in *str* of the part
## not matching *re* (event-indexed) and the part that matches *re*
## not matching *re* (even-indexed) and the part that matches *re*
## (odd-indexed).
##
## .. bro:see:: split_string split_string1 split_string_all str_split

3
testing/btest/Baseline/broker.enable-and-exit/output
Normal file
@@ -0,0 +1,3 @@
1
2
terminating
32
testing/btest/Baseline/core.reassembly/output
Normal file
@@ -0,0 +1,32 @@
----------------------
flow weird, excessively_small_fragment, 164.1.123.163, 164.1.123.61
flow weird, fragment_size_inconsistency, 164.1.123.163, 164.1.123.61
flow weird, fragment_inconsistency, 164.1.123.163, 164.1.123.61
flow weird, fragment_inconsistency, 164.1.123.163, 164.1.123.61
flow weird, dns_unmatched_msg, 164.1.123.163, 164.1.123.61
----------------------
flow weird, excessively_small_fragment, 164.1.123.163, 164.1.123.61
flow weird, excessively_small_fragment, 164.1.123.163, 164.1.123.61
flow weird, fragment_overlap, 164.1.123.163, 164.1.123.61
----------------------
flow weird, fragment_with_DF, 210.54.213.247, 131.243.1.10
flow weird, fragment_with_DF, 210.54.213.247, 131.243.1.10
flow weird, fragment_with_DF, 210.54.213.247, 131.243.1.10
flow weird, fragment_with_DF, 210.54.213.247, 131.243.1.10
flow weird, fragment_with_DF, 210.54.213.247, 131.243.1.10
----------------------
flow weird, excessively_small_fragment, 128.32.46.142, 10.0.0.1
flow weird, excessively_small_fragment, 128.32.46.142, 10.0.0.1
flow weird, fragment_inconsistency, 128.32.46.142, 10.0.0.1
----------------------
net_weird, truncated_IP
net_weird, truncated_IP
net_weird, truncated_IP
net_weird, truncated_IP
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliidlhd
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], dgphrodofqhq, orgmmpelofil
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], lenhfdqhqfgs, dfpqssidkpdg
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliislrr
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], iokgedlsdkjkiefgmeqkfjoh, ggdeolssksemrhedoledddml
net_weird, truncated_IP
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?>\x0d\x0a<g:searchrequest xmlns:g=, OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?igplqgeqsonkllfshdjplhjspmde
4
testing/btest/Baseline/core.tcp.quantum-insert/.stdout
Normal file
@@ -0,0 +1,4 @@
----- rexmit_inconsistency -----
1429652006.683290 c: [orig_h=178.200.100.200, orig_p=39976/tcp, resp_h=96.126.98.124, resp_p=80/tcp]
1429652006.683290 t1: HTTP/1.1 200 OK\x0d\x0aContent-Length: 5\x0d\x0a\x0d\x0aBANG!
1429652006.683290 t2: HTTP/1.1 200 OK\x0d\x0aServer: nginx/1.4.4\x0d\x0aDate:

@ -0,0 +1,11 @@
|
|||
#separator \x09
|
||||
#set_separator ,
|
||||
#empty_field (empty)
|
||||
#unset_field -
|
||||
#path http
|
||||
#open 2015-05-12-16-26-53
|
||||
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied orig_fuids orig_mime_types resp_fuids resp_mime_types
|
||||
#types time string addr port addr port count string string string string string count count count string count string string set[enum] string string set[string] vector[string] vector[string] vector[string] vector[string]
|
||||
1232039472.314927 CXWv6p3arKYeMETxOg 237.244.174.255 1905 79.218.110.244 80 1 GET ads1.msn.com /library/dap.js http://zone.msn.com/en/root/default.htm Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.0.3705; .NET CLR 1.1.4322; Media Center PC 4.0; .NET CLR 2.0.50727) 0 13249 200 OK - - - (empty) - - - - - FBcNS3RwceOxW15xg text/plain
|
||||
1232039472.446194 CXWv6p3arKYeMETxOg 237.244.174.255 1905 79.218.110.244 80 2 GET ads1.msn.com /library/dap.js http://zone.msn.com/en/root/default.htm Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.0.3705; .NET CLR 1.1.4322; Media Center PC 4.0; .NET CLR 2.0.50727) 0 13249 200 OK - - - (empty) - - - - - FDWU85N0DpedJPh93 text/plain
|
||||
#close 2015-05-12-16-26-53
|
|
@ -0,0 +1,18 @@
|
|||
#separator \x09
|
||||
#set_separator ,
|
||||
#empty_field (empty)
|
||||
#unset_field -
|
||||
#path conn
|
||||
#open 2015-06-19-21-05-46
|
||||
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
|
||||
#types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
|
||||
1093521678.945447 CXWv6p3arKYeMETxOg 10.0.0.57 2387 10.0.0.3 502 tcp - 0.000493 0 0 SF - - 0 FafA 2 80 2 80 (empty)
|
||||
1093521953.490353 CCvvfg3TEfuqmmG4bh 10.0.0.57 2579 10.0.0.8 502 tcp modbus 23.256631 24 0 SF - - 0 ShADaFf 6 272 5 208 (empty)
|
||||
1093521681.696827 CjhGID4nQcgTWjvg4c 10.0.0.57 2578 10.0.0.3 502 tcp modbus 385.694948 112 138 S3 - - 0 ShADdf 20 920 12 626 (empty)
|
||||
1093522326.102435 CsRx2w45OKnoww6xl4 10.0.0.9 3082 10.0.0.3 502 tcp modbus 177.095534 72 69 SF - - 0 ShADdFaf 16 720 9 437 (empty)
|
||||
1093522946.554059 CRJuHdVW0XPVINV8a 10.0.0.57 2585 10.0.0.8 502 tcp - 76.561880 926 0 SF - - 0 ShADafF 8 1254 7 288 (empty)
|
||||
1093523065.562221 CPbrpk1qSsw6ESzHV4 10.0.0.8 502 10.0.0.57 4446 tcp - 155.114237 128 0 SF - - 0 ShADaFf 16 776 15 608 (empty)
|
||||
1153491879.610371 C6pKV8GSxOnSLghOa 192.168.66.235 2582 166.161.16.230 502 tcp - 2.905078 0 0 S0 - - 0 S 2 96 0 0 (empty)
|
||||
1153491888.530306 CIPOse170MGiRM1Qf4 192.168.66.235 2582 166.161.16.230 502 tcp modbus 85.560847 1692 1278 S1 - - 0 ShADad 167 8380 181 8522 (empty)
|
||||
1342774499.588269 C7XEbhP654jzLoe3a 10.1.1.234 51411 10.10.5.85 502 tcp modbus 2100.811351 237936 4121200 S2 - - 0 ShADdaF 39659 2300216 20100 5166412 (empty)
|
||||
#close 2015-06-19-21-05-51
|
|
@ -0,0 +1,12 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path files
#open 2015-06-02-01-46-30
#fields ts fuid tx_hosts rx_hosts conn_uids source depth analyzers mime_type filename duration local_orig is_orig seen_bytes total_bytes missing_bytes overflow_bytes timedout parent_fuid
#types time string set[addr] set[addr] set[string] string count set[string] string string interval bool bool count count count count bool string
1254722770.692743 Fel9gs4OtNEV6gUJZ5 10.10.1.4 74.53.140.153 CXWv6p3arKYeMETxOg SMTP 3 (empty) text/plain - 0.000000 - T 77 - 0 0 F -
1254722770.692743 Ft4M3f2yMvLlmwtbq9 10.10.1.4 74.53.140.153 CXWv6p3arKYeMETxOg SMTP 4 (empty) text/html - 0.000061 - T 1868 - 0 0 F -
1254722770.692804 FL9Y0d45OI4LpS6fmh 10.10.1.4 74.53.140.153 CXWv6p3arKYeMETxOg SMTP 5 (empty) text/plain NEWS.txt 1.165512 - T 10809 - 0 0 F -
#close 2015-06-02-01-46-31
@ -0,0 +1,10 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path smtp
#open 2015-06-02-01-46-30
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth helo mailfrom rcptto date from to reply_to msg_id in_reply_to subject x_originating_ip first_received second_received last_reply path user_agent tls fuids
#types time string addr port addr port count string string set[string] string string set[string] string string string string addr string string string vector[addr] string bool vector[string]
1254722768.219663 CXWv6p3arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 GP <gurpartap@patriots.in> <raj_deol2002in@yahoo.co.in> Mon, 5 Oct 2009 11:36:07 +0530 "Gurpartap Singh" <gurpartap@patriots.in> <raj_deol2002in@yahoo.co.in> - <000301ca4581$ef9e57f0$cedb07d0$@in> - SMTP - - - 250 OK id=1Mugho-0003Dg-Un 74.53.140.153,10.10.1.4 Microsoft Office Outlook 12.0 F Fel9gs4OtNEV6gUJZ5,Ft4M3f2yMvLlmwtbq9,FL9Y0d45OI4LpS6fmh
#close 2015-06-02-01-46-31
11
testing/btest/Baseline/scripts.base.utils.urls/output
Normal file
@ -0,0 +1,11 @@
[scheme=https, netlocation=www.example.com, portnum=<uninitialized>, path=/, file_name=<uninitialized>, file_base=<uninitialized>, file_ext=<uninitialized>, params=<uninitialized>]
[scheme=http, netlocation=example.com, portnum=99, path=/test//, file_name=<uninitialized>, file_base=<uninitialized>, file_ext=<uninitialized>, params={
[foo] = bar
}]
[scheme=ftp, netlocation=1.2.3.4, portnum=<uninitialized>, path=/pub/files/something.exe, file_name=something.exe, file_base=something, file_ext=exe, params=<uninitialized>]
[scheme=http, netlocation=hyphen-example.com, portnum=<uninitialized>, path=/index.asp, file_name=index.asp, file_base=index, file_ext=asp, params={
[q] = 123
}]
[scheme=<uninitialized>, netlocation=dfasjdfasdfasdf, portnum=<uninitialized>, path=/, file_name=<uninitialized>, file_base=<uninitialized>, file_ext=<uninitialized>, params={

}]
File diff suppressed because one or more lines are too long
BIN
testing/btest/Traces/http/missing-zlib-header.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/ipv4/fragmented-1.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/ipv4/fragmented-2.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/ipv4/fragmented-3.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/ipv4/fragmented-4.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/tcp/qi_internet_SYNACK_curl_jsonip.pcap
Normal file
Binary file not shown.
BIN
testing/btest/Traces/tcp/reassembly.pcap
Normal file
Binary file not shown.
19
testing/btest/broker/enable-and-exit.bro
Normal file
@ -0,0 +1,19 @@
# @TEST-REQUIRES: grep -q ENABLE_BROKER $BUILD/CMakeCache.txt

# @TEST-EXEC: bro -b %INPUT >output
# @TEST-EXEC: btest-diff output

redef exit_only_after_terminate = T;

event terminate_me() {
	print "terminating";
	terminate();
	}

event bro_init() {
	BrokerComm::enable();

	print "1";
	schedule 1sec { terminate_me() };
	print "2";
	}
10
testing/btest/core/leaks/smtp_attachment.test
Normal file
@ -0,0 +1,10 @@
# Needs perftools support.
#
# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks
#
# @TEST-GROUP: leaks
#
# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local btest-bg-run bro bro -b -m -r $TRACES/smtp.trace %INPUT
# @TEST-EXEC: btest-bg-wait 60

@load base/protocols/smtp
26
testing/btest/core/reassembly.bro
Normal file
@ -0,0 +1,26 @@
# @TEST-EXEC: bro -C -r $TRACES/ipv4/fragmented-1.pcap %INPUT >>output
# @TEST-EXEC: bro -C -r $TRACES/ipv4/fragmented-2.pcap %INPUT >>output
# @TEST-EXEC: bro -C -r $TRACES/ipv4/fragmented-3.pcap %INPUT >>output
# @TEST-EXEC: bro -C -r $TRACES/ipv4/fragmented-4.pcap %INPUT >>output
# @TEST-EXEC: bro -C -r $TRACES/tcp/reassembly.pcap %INPUT >>output
# @TEST-EXEC: btest-diff output

event bro_init()
	{
	print "----------------------";
	}

event flow_weird(name: string, src: addr, dst: addr)
	{
	print "flow weird", name, src, dst;
	}

event net_weird(name: string)
	{
	print "net_weird", name;
	}

event rexmit_inconsistency(c: connection, t1: string, t2: string)
	{
	print "rexmit_inconsistency", c$id, t1, t2;
	}
12
testing/btest/core/tcp/quantum-insert.bro
Normal file
@ -0,0 +1,12 @@
# @TEST-EXEC: bro -b -r $TRACES/tcp/qi_internet_SYNACK_curl_jsonip.pcap %INPUT
# @TEST-EXEC: btest-diff .stdout

# Quantum Insert like attack, overlapping TCP packet with different content
redef tcp_max_old_segments = 10;
event rexmit_inconsistency(c: connection, t1: string, t2: string)
	{
	print "----- rexmit_inconsistency -----";
	print fmt("%.6f c: %s", network_time(), c$id);
	print fmt("%.6f t1: %s", network_time(), t1);
	print fmt("%.6f t2: %s", network_time(), t2);
	}
@ -0,0 +1,6 @@
# This tests an issue where some web servers don't
# include an appropriate ZLIB header on deflated
# content.
#
# @TEST-EXEC: bro -r $TRACES/http/missing-zlib-header.pcap %INPUT
# @TEST-EXEC: btest-diff http.log
@ -5,6 +5,8 @@
# @TEST-EXEC: cat ${DIST}/src/analyzer/protocol/modbus/events.bif | grep "^event modbus_" | wc -l >total
# @TEST-EXEC: echo `cat covered` of `cat total` events triggered by trace >coverage
# @TEST-EXEC: btest-diff coverage
# @TEST-EXEC: btest-diff conn.log


event modbus_message(c: connection, headers: ModbusHeaders, is_orig: bool)
	{
@ -0,0 +1,5 @@
# @TEST-EXEC: bro -b -r $TRACES/smtp.trace %INPUT
# @TEST-EXEC: btest-diff smtp.log
# @TEST-EXEC: btest-diff files.log

@load base/protocols/smtp
19
testing/btest/scripts/base/utils/urls.test
Normal file
@ -0,0 +1,19 @@
# @TEST-EXEC: bro %INPUT >output
# @TEST-EXEC: btest-diff output

# This is loaded by default.
#@load base/utils/urls

print decompose_uri("https://www.example.com/");
print decompose_uri("http://example.com:99/test//?foo=bar");
print decompose_uri("ftp://1.2.3.4/pub/files/something.exe");
print decompose_uri("http://hyphen-example.com/index.asp?q=123");

# This is mostly undefined behavior but it doesn't give any
# reporter messages at least.
print decompose_uri("dfasjdfasdfasdf?asd");

# These aren't supported yet.
#print decompose_uri("mailto:foo@bar.com?subject=test!");
#print decompose_uri("http://example.com/?test=ampersand&test");
#print decompose_uri("http://user:password@example.com/");
2
testing/external/subdir-btest.cfg
vendored
@ -19,3 +19,5 @@ DIST=%(testbase)s/../../..
BUILD=%(testbase)s/../../../build
BRO_PROFILER_FILE=%(testbase)s/.tmp/script-coverage.XXXXXX
BRO_DNS_FAKE=1
# For Fedora 21 - it disables MD5 for certificate verification, and an environment variable must be set to permit it.
OPENSSL_ENABLE_MD5_VERIFY=1