Merge remote-tracking branch 'origin/master' into topic/seth/elasticsearch

Conflicts:
	aux/binpac
	aux/bro-aux
	aux/broccoli
	aux/broctl
	scripts/base/frameworks/logging/__load__.bro
	src/logging.bif
This commit is contained in:
Seth Hall 2012-07-06 12:01:16 -04:00
commit 601d1cf37e
217 changed files with 7146 additions and 2574 deletions

CHANGES

@ -1,4 +1,173 @@
2.1-beta | 2012-07-06 07:36:29 -0700
* Remove a non-portable test case. (Daniel Thayer)
* Fix typos in input framework doc. (Daniel Thayer)
* Fix typos in DataSeries documentation. (Daniel Thayer)
* Bugfix making custom rotate functions work again. (Robin Sommer)
* Tiny bugfix for returning writer name. (Robin Sommer)
* Moving make target update-doc-sources from top-level Makefile to
btest Makefile. (Robin Sommer)
2.0-733 | 2012-07-02 15:31:24 -0700
* Extending the input reader DoInit() API. (Bernhard Amann). It now
provides an Info struct similar to what we introduced for log
writers, including a corresponding "config" key/value table.
* Fix to make writer-info work when debugging is enabled. (Bernhard
Amann)
2.0-726 | 2012-07-02 15:19:15 -0700
* Extending the log writer DoInit() API. (Robin Sommer)
We now pass in an Info struct that contains:
- the path name (as before)
- the rotation interval
- the log_rotate_base_time in seconds
- a table of key/value pairs with further configuration options.
To fill the table, log filters have a new field "config: table[string]
of strings". This gives a way to pass arbitrary values from
script-land to writers. Interpretation is left up to the writer.
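As a sketch of how this could look from script-land (the filter name and the "compression" key are illustrative, not from this commit; interpretation of keys is left entirely to the writer):

.. code:: bro

event bro_init()
{
# hypothetical example: attach a config table to a new filter
local f: Log::Filter = [$name="with-config", $config=table(["compression"] = "gzip")];
Log::add_filter(Conn::LOG, f);
}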
* Split calc_next_rotate() into two functions, one of which is
thread-safe and can be used with the log_rotate_base_time value
from DoInit().
* Updates to the None writer. (Robin Sommer)
- It gets its own script writers/none.bro.
- New bool option LogNone::debug to enable debug output. It then
prints out all the values passed to DoInit().
- Fixed a bug that prevented Bro from terminating.
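A minimal sketch of enabling the new debug option described above (assuming the None writer is also selected as the default writer):

.. code:: bro

redef Log::default_writer = Log::WRITER_NONE;
redef LogNone::debug = T;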
2.0-723 | 2012-07-02 15:02:56 -0700
* Extract ICMPv6 NDP options and include in ICMP events. This adds
a new parameter of type "icmp6_nd_options" to the ICMPv6 neighbor
discovery events. Addresses #833. (Jon Siwek)
* Set input frontend type before starting the thread. This means
that the thread type will be output correctly in the error
message. (Bernhard Amann)
2.0-719 | 2012-07-02 14:49:03 -0700
* Fix inconsistencies in random number generation. The
srand()/rand() interface was being intermixed with the
srandom()/random() one. The latter is now used throughout. (Jon
Siwek)
* Changed the srand() and rand() BIFs to work deterministically if
Bro was given a seed file. Addresses #825. (Jon Siwek)
* Updating input framework unit tests to make them more reliable and
execute quicker. (Jon Siwek)
* Fixed race condition in writer and reader initializations. (Jon
Siwek)
* Small tweak to make test complete quicker. (Jon Siwek)
* Drain events before terminating log/thread managers. (Jon Siwek)
* Fix strict-aliasing warning in RemoteSerializer.cc. Addresses
#834. (Jon Siwek)
* Fix typos in event documentation. (Daniel Thayer)
* Fix typos in NEWS for Bro 2.1 beta. (Daniel Thayer)
2.0-709 | 2012-06-21 10:14:24 -0700
* Fix exceptions thrown in event handlers preventing others from running. (Jon Siwek)
* Add another SOCKS command. (Seth Hall)
* Fixed some problems with the SOCKS analyzer and tests. (Seth Hall)
* Updating NEWS in preparation for beta. (Robin Sommer)
* Accepting different AF_INET6 values for loopback link headers.
(Robin Sommer)
2.0-698 | 2012-06-20 14:30:40 -0700
* Updates for the SOCKS analyzer (Seth Hall).
- A SOCKS log!
- Now supports SOCKSv5 in the analyzer and the DPD sigs.
- Added protocol violations.
* Updates to the tunnels framework. (Seth Hall)
- Make the uid field optional since it's conceptually incorrect
for proxies being treated as tunnels to have it.
- Reordered two fields in the log.
- Reduced the default tunnel expiration interval to something
more reasonable (1 hour).
* Make Teredo bubble packet parsing more lenient. (Jon Siwek)
* Fix a crash in NetSessions::ParseIPPacket(). (Jon Siwek)
2.0-690 | 2012-06-18 16:01:33 -0700
* Support for decapsulating tunnels via the new tunnel framework in
base/frameworks/tunnels.
Bro currently supports Teredo, AYIYA, IP-in-IP (both IPv4 and
IPv6), and SOCKS. For all these, it logs the outer tunnel
connections in both conn.log and tunnel.log, and proceeds to
analyze the inner payload as if it were not tunneled, including
also logging it in conn.log (with a new tunnel_parents column
pointing back to the outer connection(s)). (Jon Siwek, Seth Hall,
Gregor Maier)
* The options "tunnel_port" and "parse_udp_tunnels" have been
removed. (Jon Siwek)
2.0-623 | 2012-06-15 16:24:52 -0700
* Changing an error in the input framework to a warning. (Robin
Sommer)
2.0-622 | 2012-06-15 15:38:43 -0700
* Input framework updates. (Bernhard Amann)
- Disable streaming reads from executed commands. This led to
hanging Bro instances because pclose apparently can wait for
eternity if things go wrong.
- Automatically delete disabled input streams.
- Documentation.
2.0-614 | 2012-06-15 15:19:49 -0700
* Remove an old, unused diff canonifier. (Jon Siwek)
* Improve an error message in ICMP analyzer. (Jon Siwek)
* Fix a warning message when building docs. (Daniel Thayer)
* Fix many errors in the event documentation. (Daniel Thayer)
2.0-608 | 2012-06-11 15:59:00 -0700
* Add more error handling code to logging of enum vals. Addresses


@ -41,9 +41,6 @@ broxygen: configured
broxygenclean: configured
	$(MAKE) -C $(BUILD) $@

dist:
	@rm -rf $(VERSION_FULL) $(VERSION_FULL).tgz
	@rm -rf $(VERSION_MIN) $(VERSION_MIN).tgz

NEWS

@ -3,13 +3,90 @@ Release Notes
=============
This document summarizes the most important changes in the current Bro
release. For a complete list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own CHANGES.)
Bro 2.1 Beta
------------
New Functionality
~~~~~~~~~~~~~~~~~
- Bro now comes with extensive IPv6 support. Past versions offered
only basic IPv6 functionality that was rarely used in practice as it
had to be enabled explicitly. IPv6 support is now fully integrated
into all parts of Bro including protocol analysis and the scripting
language. It's on by default and no longer requires any special
configuration.
Some of the most significant enhancements include support for IPv6
fragment reassembly, support for following IPv6 extension header
chains, and support for tunnel decapsulation (6to4 and Teredo). The
DNS analyzer now handles AAAA records properly, and DNS lookups that
Bro itself performs now include AAAA queries, so that, for example,
the result returned by script-level lookups is a set that can
contain both IPv4 and IPv6 addresses. Support for the most common
ICMPv6 message types has been added. Also, the FTP EPSV and EPRT
commands are now handled properly. Internally, the way IP addresses
are stored has been improved, so Bro can handle both IPv4
and IPv6 by default without any special configuration.
In addition to Bro itself, the other Bro components have also been
made IPv6-aware by default. In particular, significant changes were
made to trace-summary, PySubnetTree, and Broccoli to support IPv6.
- Bro now decapsulates tunnels via its new tunnel framework located in
scripts/base/frameworks/tunnels. It currently supports Teredo,
AYIYA, IP-in-IP (both IPv4 and IPv6), and SOCKS. For all these, it
logs the outer tunnel connections in both conn.log and tunnel.log,
and then proceeds to analyze the inner payload as if it were not
tunneled, including also logging that session in conn.log. For
SOCKS, it generates a new socks.log in addition with more
information.
- Bro now features a flexible input framework that allows users to
integrate external information in real-time into Bro while it's
processing network traffic. The most direct use-case at the moment
is reading data from ASCII files into Bro tables, with updates
picked up automatically when the file changes during runtime. See
doc/input.rst for more information.
Internally, the input framework is structured around the notion of
"reader plugins" that make it easy to interface to different data
sources. We will add more in the future.
- Bro's default ASCII log format is not exactly the most efficient way
for storing and searching large volumes of data. As an alternative,
Bro now comes with experimental support for DataSeries output, an
efficient binary format for recording structured bulk data.
DataSeries is developed and maintained at HP Labs. See
doc/logging-dataseries for more information.
- BroControl now has built-in support for host-based load-balancing
when using either PF_RING, Myricom cards, or individual interfaces.
Instead of adding a separate worker entry in node.cfg for each Bro
worker process on each worker host, it is now possible to just
specify the number of worker processes on each host and BroControl
configures everything correctly (including any necessary environment
variables for the balancers).
This change adds three new keywords to the node.cfg file (to be used
with worker entries): lb_procs (specifies number of workers on a
host), lb_method (specifies what type of load balancing to use:
pf_ring, myricom, or interfaces), and lb_interfaces (used only with
"lb_method=interfaces" to specify which interfaces to load-balance
on).
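As a sketch, a node.cfg worker entry using these keywords might look like the following (host and interface values are hypothetical)::

[worker-1]
type=worker
host=192.168.1.10
interface=eth0
lb_method=pf_ring
lb_procs=4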
Changed Functionality
~~~~~~~~~~~~~~~~~~~~~
The following summarizes the most important differences in existing
functionality. Note that this list is not complete, see CHANGES for
the full set.
- Changes in dependencies:
* Bro now requires CMake >= 2.6.3.
@ -17,8 +94,7 @@ Bro 2.1
configure time. Doing so can significantly improve memory and
CPU use.
- The configure switch --enable-brov6 is gone.
- DNS name lookups performed by Bro now also query AAAA records. The
results of the A and AAAA queries for a given hostname are combined
@ -35,12 +111,12 @@ Bro 2.1
- The syntax for IPv6 literals changed from "2607:f8b0:4009:802::1012"
to "[2607:f8b0:4009:802::1012]".
- Bro now spawns threads for doing its logging. From a user's
perspective not much should change, except that the OS may now show
a bunch of Bro threads.
- We renamed the configure option --enable-perftools to
--enable-perftools-debug to indicate that the switch is only relevant
for debugging the heap.
- Bro's ICMP analyzer now handles both IPv4 and IPv6 messages with a
@ -50,8 +126,8 @@ Bro 2.1
- Log postprocessor scripts get an additional argument indicating the
type of the log writer in use (e.g., "ascii").
- BroControl's make-archive-name script also receives the writer
type, but as its 2nd(!) argument. If you're using a custom version
of that script, you need to adapt it. See the shipped version for
details.
@ -60,7 +136,10 @@ Bro 2.1
signature_files constant, this can be used to load signatures
relative to the current script (e.g., "@load-sigs ./foo.sig").
- The options "tunnel_port" and "parse_udp_tunnels" have been removed.
Bro now supports decapsulating tunnels directly for protocols it
understands.
Bro 2.0
-------
@ -93,7 +172,7 @@ final release are:
ASCII logger now respects to add a suffix to the log files it
creates.
* The ASCII logs now include further header information, and
fields set to an empty value are now logged as ``(empty)`` by
default (instead of ``-``, which is already used for fields that
are not set at all).


@ -1 +1 @@
-2.0-608
+2.1-beta

@ -1 +1 @@
-Subproject commit b4094cb75e0a7769123f7db1f5d73f3f9f1c3977
+Subproject commit 4ad8d15b6395925c9875c9d2912a6cc3b4918e0a

@ -1 +1 @@
-Subproject commit 2038e3de042115c3caa706426e16c830c1fd1e9e
+Subproject commit c691c01e9cefae5a79bcd4b0f84ca387c8c587a7

@ -1 +1 @@
-Subproject commit 4e17842743fef8df6abf0588c7ca86c6937a2b6d
+Subproject commit bd9d698f708908f7258211b534c91467d486983b

@ -1 +1 @@
-Subproject commit 892b60edb967bb456872638f22ba994e84530137
+Subproject commit 6bfc0bfae0406deddf207475582bf7a17f1787af

@ -1 +1 @@
-Subproject commit 4697bf4c8046a3ab7d5e00e926c5db883cb44664
+Subproject commit 44441a6c912c7c9f8d4771e042306ec5f44e461d


@ -171,6 +171,10 @@
#ifndef HAVE_IPPROTO_IPV6
#define IPPROTO_IPV6 41
#endif
#cmakedefine HAVE_IPPROTO_IPV4
#ifndef HAVE_IPPROTO_IPV4
#define IPPROTO_IPV4 4
#endif
#cmakedefine HAVE_IPPROTO_ROUTING
#ifndef HAVE_IPPROTO_ROUTING
#define IPPROTO_ROUTING 43


@ -1,183 +1,407 @@
==============================================
Loading Data into Bro with the Input Framework
==============================================
.. rst-class:: opening
Bro now features a flexible input framework that allows users
to import data into Bro. Data is either read into Bro tables or
converted to events which can then be handled by scripts.
This document gives an overview of how to use the input framework
with some examples. For more complex scenarios it is
worthwhile to take a look at the unit tests in
``testing/btest/scripts/base/frameworks/input/``.
.. contents::
Reading Data into Tables
========================
Probably the most interesting use-case of the input framework is to
read data into a Bro table.
By default, the input framework reads the data in the same format
as it is written by the logging framework in Bro - a tab-separated
ASCII file.
We will show the ways to read files into Bro with a simple example.
For this example we assume that we want to import data from a blacklist
that contains server IP addresses as well as the timestamp and the reason
for the block.
An example input file could look like this:
::
#fields ip timestamp reason
192.168.17.1 1333252748 Malware host
192.168.27.2 1330235733 Botnet server
192.168.250.3 1333145108 Virus detected
To read a file into a Bro table, two record types have to be defined.
One contains the types and names of the columns that should constitute the
table keys and the second contains the types and names of the columns that
should constitute the table values.
In our case, we want to be able to look up IPs. Hence, our key record
only contains the server IP. All other elements should be stored as
the table content.
The two records are defined as:
.. code:: bro
type Idx: record {
ip: addr;
};
type Val: record {
timestamp: time;
reason: string;
};
Note that the names of the fields in the record definitions have to correspond
to the column names listed in the '#fields' line of the log file, in this
case 'ip', 'timestamp', and 'reason'.
The log file is read into the table with a simple call of the ``add_table``
function:
.. code:: bro
global blacklist: table[addr] of Val = table();
Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]);
Input::remove("blacklist");
With these three lines we first create an empty table that should contain the
blacklist data and then instruct the input framework to open an input stream
named ``blacklist`` to read the data into the table. The third line removes the
input stream again, because we do not need it any more after the data has been
read.
Because some data files can - potentially - be rather big, the input framework
works asynchronously. A new thread is created for each new input stream.
This thread opens the input data file, converts the data into a Bro format and
sends it back to the main Bro thread.
Because of this, the data is not immediately accessible. Depending on the
size of the data source it might take from a few milliseconds up to a few
seconds until all data is present in the table. Please note that this means
that when Bro is running without an input source or on very short captured
files, it might terminate before the data is present in the system (because
Bro already handled all packets before the import thread finished).
Subsequent calls to an input source are queued until the previous action has
been completed. Because of this, it is, for example, possible to call
``add_table`` and ``remove`` in two subsequent lines: the ``remove`` action
will remain queued until the first read has been completed.
Once the input framework finishes reading from a data source, it fires
the ``update_finished`` event. Once this event has been received all data
from the input file is available in the table.
.. code:: bro
event Input::update_finished(name: string, source: string) {
# now all data is in the table
print blacklist;
}
The table can also already be used while the data is still being read - it
just might not contain all lines in the input file when the event has not
yet fired. After it has been populated it can be used like any other Bro
table and blacklist entries can easily be tested:
.. code:: bro
if ( 192.168.18.12 in blacklist )
# take action
Re-reading and streaming data
-----------------------------
For many data sources, like for many blacklists, the source data is continually
changing. For these cases, the Bro input framework supports several ways to
deal with changing data files.
The first, very basic method is an explicit refresh of an input stream. When
an input stream is open, the function ``force_update`` can be called. This
will trigger a complete refresh of the table; any changed elements from the
file will be updated. After the update is finished the ``update_finished``
event will be raised.
In our example the call would look like:
.. code:: bro
Input::force_update("blacklist");
The input framework also supports two automatic refresh modes. The first mode
continually checks if a file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current
state. Each time a change has been detected and all the new data has been
read into the table, the ``update_finished`` event is raised.
The second mode is a streaming mode. This mode assumes that the source data
file is an append-only file to which new data is continually appended. Bro
continually checks for new data at the end of the file and will add the new
data to the table. If newer lines in the file have the same index as previous
lines, they will overwrite the values in the output table. Because of the
nature of streaming reads (data is continually added to the table),
the ``update_finished`` event is never raised when using streaming reads.
The reading mode can be selected by setting the ``mode`` option of the
add_table call. Valid values are ``MANUAL`` (the default), ``REREAD``
and ``STREAM``.
Hence, when adding ``$mode=Input::REREAD`` to the previous example, the
blacklist table will always reflect the state of the blacklist input file.
.. code:: bro
Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD]);
Receiving change events
-----------------------
When re-reading files, it might be interesting to know exactly which lines in
the source files have changed.
For this reason, the input framework can raise an event each time when a data
item is added to, removed from or changed in a table.
The event definition looks like this:
.. code:: bro
event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) {
# act on values
}
The event has to be specified in ``$ev`` in the ``add_table`` call:
.. code:: bro
Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]);
The ``description`` field of the event contains the arguments that were
originally supplied to the add_table call. Hence, the name of the stream can,
for example, be accessed with ``description$name``. ``tpe`` is an enum
containing the type of the change that occurred.
If a line that was not previously present in the table has been added,
then ``tpe`` will contain ``Input::EVENT_NEW``. In this case ``left`` contains
the index of the added table entry and ``right`` contains the values of the
added entry.
If a table entry that already was present is altered during the re-reading or
streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In
this case ``left`` contains the index of the changed table entry and ``right``
contains the values of the entry before the change. The reason for this is
that the table already has been updated when the event is raised. The current
value in the table can be ascertained by looking up the current table value.
Hence it is possible to compare the new and the old values of the table.
If a table element is removed because it was no longer present during a
re-read, then ``tpe`` will contain ``Input::REMOVED``. In this case ``left``
contains the index and ``right`` the values of the removed element.
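Putting the three cases together, a change-event handler might look like this sketch (enum names as given above; ``Idx`` and ``Val`` are the records from the blacklist example):

.. code:: bro

event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val)
{
if ( tpe == Input::EVENT_NEW )
print fmt("new blacklist entry for %s", left$ip);
else if ( tpe == Input::EVENT_CHANGED )
print fmt("entry for %s changed", left$ip);
else
print fmt("entry for %s removed", left$ip);
}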
Filtering data during import
----------------------------
The input framework also allows a user to filter the data during the import.
To this end, predicate functions are used. A predicate function is called
before a new element is added/changed/removed from a table. The predicate
can either accept or veto the change by returning true for an accepted
change and false for a rejected change. Furthermore, it can alter the data
before it is written to the table.
The following example filter will reject to add entries to the table when
they were generated over a month ago. It will accept all changes and all
removals of values that are already present in the table.
.. code:: bro
Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD,
$pred(typ: Input::Event, left: Idx, right: Val) = {
if ( typ != Input::EVENT_NEW ) {
return T;
}
return ( ( current_time() - right$timestamp ) < (30 day) );
}]);
To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called
before the change is committed to the table. Hence, when a table element is
changed (``tpe`` is ``INPUT::EVENT_CHANGED``), ``left`` and ``right``
contain the new values, but the destination (``blacklist`` in our example)
still contains the old values. This allows predicate functions to examine
the changes between the old and the new version before deciding if they
should be allowed.
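For example, a predicate could normalize a value before it is committed. A sketch, reusing the blacklist example (``to_lower`` is a standard Bro string function):

.. code:: bro

Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD,
$pred(typ: Input::Event, left: Idx, right: Val) = {
# rewrite the value before it reaches the table
right$reason = to_lower(right$reason);
return T;
}]);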
Different readers
-----------------
The input framework supports different kinds of readers for different kinds
of source data files. At the moment, the default reader reads ASCII files
formatted in the Bro log file format (tab-separated values). At the moment,
Bro comes with two other readers. The ``RAW`` reader reads a file that is
split by a specified record separator (usually newline). The contents are
returned line-by-line as strings; it can, for example, be used to read
configuration files and the like and is probably
only useful in the event mode and not for reading data to tables.
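As a sketch of using the ``RAW`` reader in event mode (the source file, record, and event names are illustrative):

.. code:: bro

type OneLine: record {
s: string;
};

event raw_line(description: Input::EventDescription, tpe: Input::Event, s: string)
{
# each record (here: each line) arrives as a single string
print s;
}

event bro_init()
{
Input::add_event([$source="/var/log/messages", $name="raw", $reader=Input::READER_RAW, $fields=OneLine, $ev=raw_line]);
}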
Another included reader is the ``BENCHMARK`` reader, which is being used
to optimize the speed of the input framework. It can generate arbitrary
amounts of semi-random data in all Bro data types supported by the input
framework.
In the future, the input framework will get support for new data sources
like, for example, different databases.
Add_table options
-----------------
This section lists all possible options that can be used for the add_table
function and gives a short explanation of their use. Most of the options
already have been discussed in the previous sections.
The possible fields that can be set for a table stream are:
``source``
A mandatory string identifying the source of the data.
For the ASCII reader this is the filename.
``name``
A mandatory name for the filter that can later be used
to manipulate it further.
``idx``
Record type that defines the index of the table.
``val``
Record type that defines the values of the table.
``reader``
The reader used for this stream. Default is ``READER_ASCII``.
``mode``
The mode in which the stream is opened. Possible values are
``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
``MANUAL`` means that the file is not updated after it has
been read. Changes to the file will not be reflected in the
data Bro knows. ``REREAD`` means that the whole file is read
again each time a change is found. This should be used for
files that are mapped to a table where individual lines can
change. ``STREAM`` means that the data from the file is
streamed. Events / table entries will be generated as new
data is appended to the file.
``destination``
The destination table.
``ev``
Optional event that is raised, when values are added to,
changed in, or deleted from the table. Events are passed an
Input::Event description as the first argument, the index
record as the second argument and the values as the third
argument.
``pred``
Optional predicate, that can prevent entries from being added
to the table and events from being sent.
``want_record``
Boolean value, that defines if the event wants to receive the
fields inside of a single record value, or individually
(default). This can be used if ``val`` is a record
containing only one type. In this case, if ``want_record`` is
set to false, the table will contain elements of the type
contained in ``val``.
Reading Data to Events
======================
The second supported mode of the input framework is reading data to Bro
events instead of reading them to a table using event streams.
Event streams work very similarly to table streams that were already
discussed in much detail. To read the blacklist of the previous example
into an event stream, the following Bro code could be used:
.. code:: bro
type Val: record {
ip: addr;
timestamp: time;
reason: string;
};
event blacklistentry(description: Input::EventDescription, tpe: Input::Event, ip: addr, timestamp: time, reason: string) {
# work with event data
}
event bro_init() {
Input::add_event([$source="blacklist.file", $name="blacklist", $fields=Val, $ev=blacklistentry]);
}
The main difference in the declaration of the event stream is, that an event
stream needs no separate index and value declarations -- instead, all source
data types are provided in a single record definition.
Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.
The options that can be set when creating an event stream with
``add_event`` are:
``source``
    A mandatory string identifying the source of the data.
    For the ASCII reader this is the filename.

``name``
    A mandatory name for the stream that can later be used
    to remove it.

``fields``
    Name of a record type containing the fields, which should be
    retrieved from the input stream.

``ev``
    The event which is fired, after a line has been read from the
    input source. The first argument that is passed to the event
    is an Input::Event structure, followed by the data, either
    inside of a record (if ``want_record`` is set) or as
    individual fields. The Input::Event structure can contain
    information, if the received line is ``NEW``, has been
    ``CHANGED`` or ``DELETED``. Since the ASCII reader cannot
    track this information for event filters, the value is
    always ``NEW`` at the moment.

``mode``
    The mode in which the stream is opened. Possible values are
    ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
    ``MANUAL`` means that the file is not updated after it has
    been read. Changes to the file will not be reflected in the
    data Bro knows. ``REREAD`` means that the whole file is read
    again each time a change is found. This should be used for
    files that are mapped to a table where individual lines can
    change. ``STREAM`` means that the data from the file is
    streamed. Events / table entries will be generated as new
    data is appended to the file.

``reader``
    The reader used for this stream. Default is ``READER_ASCII``.

``want_record``
    Boolean value that defines if the event wants to receive the
    fields inside of a single record value, or individually
    (default). If this is set to true, the event will receive a
    single record of the type provided in ``fields``.
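As a sketch, re-using the blacklist example from above, an event stream can
be kept in sync with a changing file by opening it in ``REREAD`` mode (the
file name is again illustrative):

.. code:: bro

    type Val: record {
        ip: addr;
        timestamp: time;
        reason: string;
    };

    event blacklistentry(description: Input::EventDescription,
                         tpe: Input::Event,
                         ip: addr, timestamp: time, reason: string)
        {
        # Raised again for each line whenever the file changes.
        }

    event bro_init()
        {
        Input::add_event([$source="blacklist.file", $name="blacklist",
                          $fields=Val, $ev=blacklistentry,
                          $mode=Input::REREAD]);
        }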
@ -21,7 +21,7 @@ To use DataSeries, its libraries must be available at compile-time,

along with the supporting *Lintel* package. Generally, both are
distributed on `HP Labs' web site
<http://tesla.hpl.hp.com/opensource/>`_. Currently, however, you need
to use recent development versions for both packages, which you can
download from github like this::

    git clone http://github.com/dataseries/Lintel

@ -76,7 +76,7 @@ tools, which its installation process installs into ``<prefix>/bin``.

For example, to convert a file back into an ASCII representation::

    $ ds2txt conn.log
    [... We skip a bunch of metadata here ...]
    ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
    1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
    1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0

@ -86,13 +86,13 @@ For example, to convert a file back into an ASCII representation::

    1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211
    [...]

(``--skip-all`` suppresses the metadata.)

Note that the ASCII conversion is *not* equivalent to Bro's default
output format.

You can also switch only individual files over to DataSeries by adding
code like this to your ``local.bro``:

.. code:: bro

@ -109,7 +109,7 @@ Bro's DataSeries writer comes with a few tuning options, see

Working with DataSeries
=======================

Here are a few examples of using DataSeries command line tools to work
with the output files.

* Printing CSV::

@ -147,7 +147,7 @@ with the output files.

* Calculate some statistics:

  Mean/stddev/min/max over a column::

    $ dsstatgroupby '*' basic duration from conn.ds
    # Begin DSStatGroupByModule

@ -158,7 +158,7 @@ with the output files.

  Quantiles of total connection volume::

    $ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds
    [...]
    2159 data points, mean 24616 +- 343295 [0,1.26615e+07]
    quantiles about every 216 data points:

@ -166,7 +166,7 @@ with the output files.

    tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262
    [...]

The ``man`` pages for these tools show further options, and their
``-h`` option gives some more information (either can be a bit cryptic
unfortunately though).

@ -175,7 +175,7 @@ Deficiencies

Due to limitations of the DataSeries format, one cannot inspect its
files before they have been fully written. In other words, when using
DataSeries, it's currently not possible to inspect the live log
files inside the spool directory before they are rotated to their
final location. It seems that this could be fixed with some effort,
and we will work with the DataSeries development team on that if the
@ -377,7 +377,7 @@ uncommon to need to delete that data before the end of the connection.

Other Writers
-------------

Bro supports the following output formats other than ASCII:

.. toctree::
   :maxdepth: 1
@ -42,6 +42,7 @@ rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro)

rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro)
rest_target(${psd} base/frameworks/logging/writers/ascii.bro)
rest_target(${psd} base/frameworks/logging/writers/dataseries.bro)
rest_target(${psd} base/frameworks/logging/writers/none.bro)
rest_target(${psd} base/frameworks/metrics/cluster.bro)
rest_target(${psd} base/frameworks/metrics/main.bro)
rest_target(${psd} base/frameworks/metrics/non-cluster.bro)

@ -59,6 +60,7 @@ rest_target(${psd} base/frameworks/packet-filter/netstats.bro)

rest_target(${psd} base/frameworks/reporter/main.bro)
rest_target(${psd} base/frameworks/signatures/main.bro)
rest_target(${psd} base/frameworks/software/main.bro)
rest_target(${psd} base/frameworks/tunnels/main.bro)
rest_target(${psd} base/protocols/conn/contents.bro)
rest_target(${psd} base/protocols/conn/inactivity.bro)
rest_target(${psd} base/protocols/conn/main.bro)

@ -77,6 +79,8 @@ rest_target(${psd} base/protocols/irc/main.bro)

rest_target(${psd} base/protocols/smtp/entities-excerpt.bro)
rest_target(${psd} base/protocols/smtp/entities.bro)
rest_target(${psd} base/protocols/smtp/main.bro)
rest_target(${psd} base/protocols/socks/consts.bro)
rest_target(${psd} base/protocols/socks/main.bro)
rest_target(${psd} base/protocols/ssh/main.bro)
rest_target(${psd} base/protocols/ssl/consts.bro)
rest_target(${psd} base/protocols/ssl/main.bro)
@ -11,7 +11,8 @@ export {

    ## The communication logging stream identifier.
    redef enum Log::ID += { LOG };

    ## Which interface to listen on. The addresses ``0.0.0.0`` and ``[::]``
    ## are wildcards.
    const listen_interface = 0.0.0.0 &redef;

    ## Which port to listen on.
@ -149,3 +149,64 @@ signature dpd_ssl_client {

    payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
    tcp-state originator
}
signature dpd_ayiya {
ip-proto == udp
payload /^..\x11\x29/
enable "ayiya"
}
signature dpd_teredo {
ip-proto == udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
enable "teredo"
}
signature dpd_socks4_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state originator
}
signature dpd_socks4_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state responder
enable "socks"
}
signature dpd_socks4_reverse_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state responder
}
signature dpd_socks4_reverse_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_reverse_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state originator
enable "socks"
}
signature dpd_socks5_client {
ip-proto == tcp
# Watch for a few authentication methods to reduce false positives.
payload /^\x05.[\x00\x01\x02]/
tcp-state originator
}
signature dpd_socks5_server {
ip-proto == tcp
requires-reverse-signature dpd_socks5_client
# Watch for a single authentication method to be chosen by the server or
# the server to indicate that no authentication is required.
payload /^\x05(\x00|\x01[\x00\x01\x02])/
tcp-state responder
enable "socks"
}
@ -53,6 +53,11 @@ export {

    ## really be executed. Parameters are the same as for the event. If true is
    ## returned, the update is performed. If false is returned, it is skipped.
    pred: function(typ: Input::Event, left: any, right: any): bool &optional;

    ## A key/value table that will be passed on to the reader.
    ## Interpretation of the values is left to the reader, but
    ## usually they will be used for configuration purposes.
    config: table[string] of string &default=table();
};

## EventFilter description type used for the `event` method.

@ -85,6 +90,10 @@ export {

    ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
    ev: any;

    ## A key/value table that will be passed on to the reader.
    ## Interpretation of the values is left to the reader, but
    ## usually they will be used for configuration purposes.
    config: table[string] of string &default=table();
};

## Create a new table input from a given source. Returns true on success.
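# A sketch of how the new config field can be used from script land;
# the "separator" key is purely illustrative and a given reader may
# ignore keys it does not understand:
#
#     event bro_init()
#         {
#         Input::add_table([$source="input.txt", $name="input",
#                           $idx=Idx, $val=Val, $destination=dest,
#                           $config=table(["separator"] = ",")]);
#         }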
@ -2,4 +2,4 @@

@load ./postprocessors
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/elasticsearch
@load ./writers/none
@ -138,6 +138,11 @@ export {

    ## Callback function to trigger for rotated files. If not set, the
    ## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
    postprocessor: function(info: RotationInfo) : bool &optional;

    ## A key/value table that will be passed on to the writer.
    ## Interpretation of the values is left to the writer, but
    ## usually they will be used for configuration purposes.
    config: table[string] of string &default=table();
};

## Sentinel value for indicating that a filter was not found when looked up.

@ -327,6 +332,8 @@ function __default_rotation_postprocessor(info: RotationInfo) : bool

    {
    if ( info$writer in default_rotation_postprocessors )
        return default_rotation_postprocessors[info$writer](info);

    return F;
    }

function default_path_func(id: ID, path: string, rec: any) : string
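# A sketch (for use in a site script, not part of this file) of passing
# writer configuration through a filter's new config field; the "debug"
# key is only an illustration and interpretation is entirely up to the
# writer implementation:
#
#     event bro_init()
#         {
#         Log::add_filter(Conn::LOG,
#                         [$name="with-config",
#                          $config=table(["debug"] = "T")]);
#         }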
@ -0,0 +1,17 @@
##! Interface for the None log writer. This writer is mainly for debugging.
module LogNone;
export {
## If true, output debugging output that can be useful for unit
## testing the logging framework.
const debug = F &redef;
}
function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
{
return T;
}
redef Log::default_rotation_postprocessors += { [Log::WRITER_NONE] = default_rotation_postprocessor_func };
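# Example usage (in local.bro, not here): route a copy of the connection
# log to the None writer, which discards it, printing diagnostics when
# LogNone::debug is set.
#
#     redef LogNone::debug = T;
#
#     event bro_init()
#         {
#         Log::add_filter(Conn::LOG, [$name="null",
#                                     $writer=Log::WRITER_NONE]);
#         }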
@ -0,0 +1 @@
@load ./main
@ -0,0 +1,149 @@
##! This script handles the tracking/logging of tunnels (e.g. Teredo,
##! AYIYA, or IP-in-IP such as 6to4 where "IP" is either IPv4 or IPv6).
##!
##! For any connection that occurs over a tunnel, information about its
##! encapsulating tunnels is also found in the *tunnel* field of
##! :bro:type:`connection`.
module Tunnel;
export {
## The tunnel logging stream identifier.
redef enum Log::ID += { LOG };
## Types of interesting activity that can occur with a tunnel.
type Action: enum {
## A new tunnel (encapsulating "connection") has been seen.
DISCOVER,
## A tunnel connection has closed.
CLOSE,
## No new connections over a tunnel happened in the amount of
## time indicated by :bro:see:`Tunnel::expiration_interval`.
EXPIRE,
};
## The record type which contains column fields of the tunnel log.
type Info: record {
## Time at which some tunnel activity occurred.
ts: time &log;
## The unique identifier for the tunnel, which may correspond
## to a :bro:type:`connection`'s *uid* field for non-IP-in-IP tunnels.
## This is optional because there could be numerous connections
## for payload proxies like SOCKS but we should treat it as a single
## tunnel.
uid: string &log &optional;
## The tunnel "connection" 4-tuple of endpoint addresses/ports.
## For an IP tunnel, the ports will be 0.
id: conn_id &log;
## The type of tunnel.
tunnel_type: Tunnel::Type &log;
## The type of activity that occurred.
action: Action &log;
};
## Logs all tunnels in an encapsulation chain with action
## :bro:see:`Tunnel::DISCOVER` that aren't already in the
## :bro:id:`Tunnel::active` table and adds them if not.
global register_all: function(ecv: EncapsulatingConnVector);
## Logs a single tunnel "connection" with action
## :bro:see:`Tunnel::DISCOVER` if it's not already in the
## :bro:id:`Tunnel::active` table and adds it if not.
global register: function(ec: EncapsulatingConn);
## Logs a single tunnel "connection" with action
## :bro:see:`Tunnel::EXPIRE` and removes it from the
## :bro:id:`Tunnel::active` table.
##
## t: A table of tunnels.
##
## idx: The index of the tunnel table corresponding to the tunnel to expire.
##
## Returns: 0secs, which when this function is used as an
## :bro:attr:`&expire_func`, indicates to remove the element at
## *idx* immediately.
global expire: function(t: table[conn_id] of Info, idx: conn_id): interval;
## Removes a single tunnel from the :bro:id:`Tunnel::active` table
## and logs the closing/expiration of the tunnel.
##
## tunnel: The tunnel which has closed or expired.
##
## action: The specific reason for the tunnel ending.
global close: function(tunnel: Info, action: Action);
## The amount of time a tunnel is not used in establishment of new
## connections before it is considered inactive/expired.
const expiration_interval = 1hrs &redef;
## Currently active tunnels. That is, tunnels for which new, encapsulated
## connections have been seen in the interval indicated by
## :bro:see:`Tunnel::expiration_interval`.
global active: table[conn_id] of Info = table() &read_expire=expiration_interval &expire_func=expire;
}
const ayiya_ports = { 5072/udp };
redef dpd_config += { [ANALYZER_AYIYA] = [$ports = ayiya_ports] };
const teredo_ports = { 3544/udp };
redef dpd_config += { [ANALYZER_TEREDO] = [$ports = teredo_ports] };
redef likely_server_ports += { ayiya_ports, teredo_ports };
event bro_init() &priority=5
{
Log::create_stream(Tunnel::LOG, [$columns=Info]);
}
function register_all(ecv: EncapsulatingConnVector)
{
for ( i in ecv )
register(ecv[i]);
}
function register(ec: EncapsulatingConn)
{
if ( ec$cid !in active )
{
local tunnel: Info;
tunnel$ts = network_time();
if ( ec?$uid )
tunnel$uid = ec$uid;
tunnel$id = ec$cid;
tunnel$action = DISCOVER;
tunnel$tunnel_type = ec$tunnel_type;
active[ec$cid] = tunnel;
Log::write(LOG, tunnel);
}
}
function close(tunnel: Info, action: Action)
{
tunnel$action = action;
tunnel$ts = network_time();
Log::write(LOG, tunnel);
delete active[tunnel$id];
}
function expire(t: table[conn_id] of Info, idx: conn_id): interval
{
close(t[idx], EXPIRE);
return 0secs;
}
event new_connection(c: connection) &priority=5
{
if ( c?$tunnel )
register_all(c$tunnel);
}
event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5
{
register_all(e);
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c$id in active )
close(active[c$id], CLOSE);
}
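# A sketch of how other scripts can inspect tunnel state: a tunneled
# connection's optional tunnel field holds its encapsulation chain,
# outermost first (the handler below is illustrative only):
#
#     event connection_state_remove(c: connection)
#         {
#         if ( c?$tunnel )
#             print fmt("%s was tunneled (depth %d)", c$uid, |c$tunnel|);
#         }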
@ -115,6 +115,61 @@ type icmp_context: record {

    DF: bool;   ##< True if the packet's *don't fragment* flag is set.
};
## Values extracted from a Prefix Information option in an ICMPv6 neighbor
## discovery message as specified by :rfc:`4861`.
##
## .. bro:see:: icmp6_nd_option
type icmp6_nd_prefix_info: record {
## Number of leading bits of the *prefix* that are valid.
prefix_len: count;
## Flag indicating the prefix can be used for on-link determination.
L_flag: bool;
## Autonomous address-configuration flag.
A_flag: bool;
## Length of time in seconds that the prefix is valid for the purpose of
## on-link determination (0xffffffff represents infinity).
valid_lifetime: interval;
## Length of time in seconds that the addresses generated from the prefix
## via stateless address autoconfiguration remain preferred
## (0xffffffff represents infinity).
preferred_lifetime: interval;
## An IP address or prefix of an IP address. Use the *prefix_len* field
## to convert this into a :bro:type:`subnet`.
prefix: addr;
};
## Options extracted from ICMPv6 neighbor discovery messages as specified
## by :rfc:`4861`.
##
## .. bro:see:: icmp_router_solicitation icmp_router_advertisement
## icmp_neighbor_advertisement icmp_neighbor_solicitation icmp_redirect
## icmp6_nd_options
type icmp6_nd_option: record {
## 8-bit identifier of the type of option.
otype: count;
## 8-bit integer representing the length of the option (including the type
## and length fields) in units of 8 octets.
len: count;
## Source Link-Layer Address (Type 1) or Target Link-Layer Address (Type 2).
## Byte ordering of this is dependent on the actual link-layer.
link_address: string &optional;
## Prefix Information (Type 3).
prefix: icmp6_nd_prefix_info &optional;
## Redirected header (Type 4). This field contains the context of the
## original, redirected packet.
redirect: icmp_context &optional;
## Recommended MTU for the link (Type 5).
mtu: count &optional;
## The raw data of the option (everything after type & length fields),
## useful for unknown option types or when the full option payload is
## truncated in the captured packet. In those cases, option fields
## won't be pre-extracted into the fields above.
payload: string &optional;
};
## A type alias for a vector of ICMPv6 neighbor discovery message options.
type icmp6_nd_options: vector of icmp6_nd_option;
# A DNS mapping between IP address and hostname resolved by Bro's internal
# resolver.
#

@ -178,6 +233,32 @@ type endpoint_stats: record {

## use ``count``. That should be changed.
type AnalyzerID: count;
module Tunnel;
export {
## Records the identity of an encapsulating parent of a tunneled connection.
type EncapsulatingConn: record {
## The 4-tuple of the encapsulating "connection". In case of an IP-in-IP
## tunnel the ports will be set to 0. The direction (i.e., orig and
## resp) are set according to the first tunneled packet seen
## and not according to the side that established the tunnel.
cid: conn_id;
## The type of tunnel.
tunnel_type: Tunnel::Type;
## A globally unique identifier that, for non-IP-in-IP tunnels,
## cross-references the *uid* field of :bro:type:`connection`.
uid: string &optional;
} &log;
} # end export
module GLOBAL;
## A type alias for a vector of encapsulating "connections", i.e for when
## there are tunnels within tunnels.
##
## .. todo:: We need this type definition only for declaring builtin functions
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
## directly and then remove this alias.
type EncapsulatingConnVector: vector of Tunnel::EncapsulatingConn;
## Statistics about a :bro:type:`connection` endpoint.
##
## .. bro:see:: connection

@ -199,10 +280,10 @@ type endpoint: record {

    flow_label: count;
};

## A connection. This is Bro's basic connection type describing IP- and
## transport-layer information about the conversation. Note that Bro uses a
## liberal interpretation of "connection" and associates instances of this type
## also with UDP and ICMP flows.
type connection: record {
    id: conn_id;    ##< The connection's identifying 4-tuple.
    orig: endpoint; ##< Statistics about originator side.

@ -227,6 +308,12 @@ type connection: record {

    ## that is very likely unique across independent Bro runs. These IDs can thus be
    ## used to tag and locate information associated with that connection.
    uid: string;
## If the connection is tunneled, this field contains information about
## the encapsulating "connection(s)" with the outermost one starting
## at index zero. It's also always the first such encapsulation seen
## for the connection unless the :bro:id:`tunnel_changed` event is handled
## and re-assigns this field to the new encapsulation.
tunnel: EncapsulatingConnVector &optional;
}; };
## Fields of a SYN packet. ## Fields of a SYN packet.
@ -884,18 +971,9 @@ const frag_timeout = 0.0 sec &redef;
const packet_sort_window = 0 usecs &redef; const packet_sort_window = 0 usecs &redef;
## If positive, indicates the encapsulation header size that should ## If positive, indicates the encapsulation header size that should
## be skipped. This either applies to all packets, or if ## be skipped. This applies to all packets.
## :bro:see:`tunnel_port` is set, only to packets on that port.
##
## .. :bro:see:: tunnel_port
const encap_hdr_size = 0 &redef; const encap_hdr_size = 0 &redef;
## A UDP port that specifies which connections to apply :bro:see:`encap_hdr_size`
## to.
##
## .. :bro:see:: encap_hdr_size
const tunnel_port = 0/udp &redef;
## Whether to use the ``ConnSize`` analyzer to count the number of packets and ## Whether to use the ``ConnSize`` analyzer to count the number of packets and
## IP-level bytes transfered by each endpoint. If true, these values are returned ## IP-level bytes transfered by each endpoint. If true, these values are returned
## in the connection's :bro:see:`endpoint` record value. ## in the connection's :bro:see:`endpoint` record value.
@ -1250,7 +1328,7 @@ type ip6_ext_hdr: record {
mobility: ip6_mobility_hdr &optional; mobility: ip6_mobility_hdr &optional;
}; };
## A type alias for a vector of IPv6 extension headers ## A type alias for a vector of IPv6 extension headers.
type ip6_ext_hdr_chain: vector of ip6_ext_hdr; type ip6_ext_hdr_chain: vector of ip6_ext_hdr;
## Values extracted from an IPv6 header. ## Values extracted from an IPv6 header.
@ -1336,6 +1414,42 @@ type pkt_hdr: record {
icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet. icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet.
}; };
## A Teredo origin indication header. See :rfc:`4380` for more information
## about the Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
## teredo_hdr
type teredo_auth: record {
id: string; ##< Teredo client identifier.
value: string; ##< HMAC-SHA1 over shared secret key between client and
##< server, nonce, confirmation byte, origin indication
##< (if present), and the IPv6 packet.
nonce: count; ##< Nonce chosen by Teredo client to be repeated by
##< Teredo server.
confirm: count; ##< Confirmation byte to be set to 0 by Teredo client
##< and non-zero by server if client needs new key.
};
## A Teredo authentication header. See :rfc:`4380` for more information
## about the Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
## teredo_hdr
type teredo_origin: record {
p: port; ##< Unobfuscated UDP port of Teredo client.
a: addr; ##< Unobfuscated IPv4 address of Teredo client.
};
## A Teredo packet header. See :rfc:`4380` for more information about the
## Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
type teredo_hdr: record {
auth: teredo_auth &optional; ##< Teredo authentication header.
origin: teredo_origin &optional; ##< Teredo origin indication header.
hdr: pkt_hdr; ##< IPv6 and transport protocol headers.
};
## Definition of "secondary filters". A secondary filter is a BPF filter given as
## index in this table. For each such filter, the corresponding event is raised for
## all matching packets.

@ -2343,6 +2457,17 @@ type bittorrent_benc_dir: table[string] of bittorrent_benc_value;

## bt_tracker_response_not_ok
type bt_tracker_headers: table[string] of string;
module SOCKS;
export {
## This record is for a SOCKS client or server to provide either a
## name or an address to represent a desired or established connection.
type Address: record {
host: addr &optional;
name: string &optional;
} &log;
}
module GLOBAL;
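# A sketch of working with SOCKS::Address, which carries either a
# literal address or a name; the helper function below is illustrative
# only and not part of this file:
#
#     function socks_endpoint(a: SOCKS::Address): string
#         {
#         if ( a?$host )
#             return fmt("%s", a$host);
#         return a?$name ? a$name : "<unknown>";
#         }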
@load base/event.bif

## BPF filter the user has set via the -f command line options. Empty if none.

@ -2636,11 +2761,33 @@ const record_all_packets = F &redef;

## .. bro:see:: conn_stats
const ignore_keep_alive_rexmit = F &redef;

module Tunnel;
export {
    ## The maximum depth of a tunnel to decapsulate until giving up.
    ## Setting this to zero will disable all types of tunnel decapsulation.
    const max_depth: count = 2 &redef;
## Toggle whether to do IPv{4,6}-in-IPv{4,6} decapsulation.
const enable_ip = T &redef;
## Toggle whether to do IPv{4,6}-in-AYIYA decapsulation.
const enable_ayiya = T &redef;
## Toggle whether to do IPv6-in-Teredo decapsulation.
const enable_teredo = T &redef;
## With this option set, the Teredo analysis will first check to see if
## other protocol analyzers have confirmed that they think they're
## parsing the right protocol and only continue with Teredo tunnel
## decapsulation if nothing else has yet confirmed. This can help
## reduce false positives of UDP traffic (e.g. DNS) that also happens
## to have a valid Teredo encapsulation.
const yielding_teredo_decapsulation = T &redef;
## How often to cleanup internal state for inactive IP tunnels.
const ip_tunnel_timeout = 24hrs &redef;
} # end export
module GLOBAL;
## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef;


@ -29,6 +29,7 @@
@load base/frameworks/metrics
@load base/frameworks/intel
@load base/frameworks/reporter
@load base/frameworks/tunnels
@load base/protocols/conn
@load base/protocols/dns
@ -36,6 +37,7 @@
@load base/protocols/http
@load base/protocols/irc
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@load base/protocols/ssl
@load base/protocols/syslog


@ -101,6 +101,10 @@ export {
resp_pkts: count &log &optional;
## Number IP level bytes the responder sent. See ``orig_pkts``.
resp_ip_bytes: count &log &optional;
## If this connection was over a tunnel, indicate the
## *uid* values for any encapsulating parent connections
## used over the lifetime of this inner connection.
tunnel_parents: set[string] &log;
};
## Event that can be handled to access the :bro:type:`Conn::Info`
@ -190,6 +194,8 @@ function set_conn(c: connection, eoc: bool)
c$conn$ts=c$start_time;
c$conn$uid=c$uid;
c$conn$id=c$id;
if ( c?$tunnel && |c$tunnel| > 0 )
add c$conn$tunnel_parents[c$tunnel[|c$tunnel|-1]$uid];
c$conn$proto=get_port_transport_proto(c$id$resp_p);
if( |Site::local_nets| > 0 )
c$conn$local_orig=Site::is_local_addr(c$id$orig_h);
@ -227,6 +233,14 @@ event content_gap(c: connection, is_orig: bool, seq: count, length: count) &prio
c$conn$missed_bytes = c$conn$missed_bytes + length;
}
event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5
{
set_conn(c, F);
if ( |e| > 0 )
add c$conn$tunnel_parents[e[|e|-1]$uid];
c$tunnel = e;
}
event connection_state_remove(c: connection) &priority=5
{


@ -0,0 +1,2 @@
@load ./consts
@load ./main


@ -0,0 +1,40 @@
module SOCKS;
export {
type RequestType: enum {
CONNECTION = 1,
PORT = 2,
UDP_ASSOCIATE = 3,
};
const v5_authentication_methods: table[count] of string = {
[0] = "No Authentication Required",
[1] = "GSSAPI",
[2] = "Username/Password",
[3] = "Challenge-Handshake Authentication Protocol",
[5] = "Challenge-Response Authentication Method",
[6] = "Secure Sockets Layer",
[7] = "NDS Authentication",
[8] = "Multi-Authentication Framework",
[255] = "No Acceptable Methods",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const v4_status: table[count] of string = {
[0x5a] = "succeeded",
[0x5b] = "general SOCKS server failure",
[0x5c] = "request failed because client is not running identd",
[0x5d] = "request failed because client's identd could not confirm the user ID string in the request",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const v5_status: table[count] of string = {
[0] = "succeeded",
[1] = "general SOCKS server failure",
[2] = "connection not allowed by ruleset",
[3] = "Network unreachable",
[4] = "Host unreachable",
[5] = "Connection refused",
[6] = "TTL expired",
[7] = "Command not supported",
[8] = "Address type not supported",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
}


@ -0,0 +1,87 @@
@load base/frameworks/tunnels
@load ./consts
module SOCKS;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Time when the proxy connection was first detected.
ts: time &log;
uid: string &log;
id: conn_id &log;
## Protocol version of SOCKS.
version: count &log;
## Username for the proxy if extracted from the network.
user: string &log &optional;
## Server status for the attempt at using the proxy.
status: string &log &optional;
## Client requested SOCKS address. Could be an address, a name or both.
request: SOCKS::Address &log &optional;
## Client requested port.
request_p: port &log &optional;
## Server bound address. Could be an address, a name or both.
bound: SOCKS::Address &log &optional;
## Server bound port.
bound_p: port &log &optional;
};
## Event that can be handled to access the SOCKS
## record as it is sent on to the logging framework.
global log_socks: event(rec: Info);
}
event bro_init() &priority=5
{
Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks]);
}
redef record connection += {
socks: SOCKS::Info &optional;
};
# Configure DPD
redef capture_filters += { ["socks"] = "tcp port 1080" };
redef dpd_config += { [ANALYZER_SOCKS] = [$ports = set(1080/tcp)] };
redef likely_server_ports += { 1080/tcp };
function set_session(c: connection, version: count)
{
if ( ! c?$socks )
c$socks = [$ts=network_time(), $id=c$id, $uid=c$uid, $version=version];
}
event socks_request(c: connection, version: count, request_type: count,
sa: SOCKS::Address, p: port, user: string) &priority=5
{
set_session(c, version);
c$socks$request = sa;
c$socks$request_p = p;
# Copy this conn_id and set the orig_p to zero because in the case of SOCKS proxies there will
# be potentially many source ports since a new proxy connection is established for each
# proxied connection. We treat this as a singular "tunnel".
local cid = copy(c$id);
cid$orig_p = 0/tcp;
Tunnel::register([$cid=cid, $tunnel_type=Tunnel::SOCKS, $payload_proxy=T]);
}
event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=5
{
set_session(c, version);
if ( version == 5 )
c$socks$status = v5_status[reply];
else if ( version == 4 )
c$socks$status = v4_status[reply];
c$socks$bound = sa;
c$socks$bound_p = p;
}
event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5
{
Log::write(SOCKS::LOG, c$socks);
}

src/AYIYA.cc Normal file

@ -0,0 +1,24 @@
#include "AYIYA.h"
AYIYA_Analyzer::AYIYA_Analyzer(Connection* conn)
: Analyzer(AnalyzerTag::AYIYA, conn)
{
interp = new binpac::AYIYA::AYIYA_Conn(this);
}
AYIYA_Analyzer::~AYIYA_Analyzer()
{
delete interp;
}
void AYIYA_Analyzer::Done()
{
Analyzer::Done();
Event(udp_session_done);
}
void AYIYA_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, int seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
interp->NewData(orig, data, data + len);
}

src/AYIYA.h Normal file

@ -0,0 +1,29 @@
#ifndef AYIYA_h
#define AYIYA_h
#include "ayiya_pac.h"
class AYIYA_Analyzer : public Analyzer {
public:
AYIYA_Analyzer(Connection* conn);
virtual ~AYIYA_Analyzer();
virtual void Done();
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new AYIYA_Analyzer(conn); }
static bool Available()
{ return BifConst::Tunnel::enable_ayiya &&
BifConst::Tunnel::max_depth > 0; }
protected:
friend class AnalyzerTimer;
void ExpireTimer(double t);
binpac::AYIYA::AYIYA_Conn* interp;
};
#endif


@ -4,6 +4,7 @@
#include "PIA.h"
#include "Event.h"
#include "AYIYA.h"
#include "BackDoor.h"
#include "BitTorrent.h"
#include "BitTorrentTracker.h"
@ -33,9 +34,11 @@
#include "NFS.h"
#include "Portmap.h"
#include "POP3.h"
#include "SOCKS.h"
#include "SSH.h"
#include "SSL.h"
#include "Syslog-binpac.h"
#include "Teredo.h"
#include "ConnSizeAnalyzer.h"
// Keep same order here as in AnalyzerTag definition!
@ -127,6 +130,16 @@ const Analyzer::Config Analyzer::analyzer_configs[] = {
Syslog_Analyzer_binpac::InstantiateAnalyzer,
Syslog_Analyzer_binpac::Available, 0, false },
{ AnalyzerTag::AYIYA, "AYIYA",
AYIYA_Analyzer::InstantiateAnalyzer,
AYIYA_Analyzer::Available, 0, false },
{ AnalyzerTag::SOCKS, "SOCKS",
SOCKS_Analyzer::InstantiateAnalyzer,
SOCKS_Analyzer::Available, 0, false },
{ AnalyzerTag::Teredo, "TEREDO",
Teredo_Analyzer::InstantiateAnalyzer,
Teredo_Analyzer::Available, 0, false },
{ AnalyzerTag::File, "FILE", File_Analyzer::InstantiateAnalyzer,
File_Analyzer::Available, 0, false },
{ AnalyzerTag::Backdoor, "BACKDOOR",


@ -215,6 +215,11 @@ public:
// analyzer, even if the method is called multiple times.
virtual void ProtocolConfirmation();
// Return whether the analyzer previously called ProtocolConfirmation()
// at least once before.
bool ProtocolConfirmed() const
{ return protocol_confirmed; }
// Report that we found a significant protocol violation which might
// indicate that the analyzed data is in fact not the expected
// protocol. The protocol_violation event is raised once per call to
@ -338,6 +343,10 @@ private:
for ( analyzer_list::iterator var = the_kids.begin(); \
var != the_kids.end(); var++ )
#define LOOP_OVER_GIVEN_CONST_CHILDREN(var, the_kids) \
for ( analyzer_list::const_iterator var = the_kids.begin(); \
var != the_kids.end(); var++ )
class SupportAnalyzer : public Analyzer {
public:
SupportAnalyzer(AnalyzerTag::Tag tag, Connection* conn, bool arg_orig)


@ -33,11 +33,15 @@ namespace AnalyzerTag {
DHCP_BINPAC, DNS_TCP_BINPAC, DNS_UDP_BINPAC,
HTTP_BINPAC, SSL, SYSLOG_BINPAC,
// Decapsulation analyzers.
AYIYA,
SOCKS,
Teredo,
// Other
File, Backdoor, InterConn, SteppingStone, TCPStats,
ConnSize,
// Support-analyzers
Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP,
Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh,


@ -187,6 +187,9 @@ endmacro(BINPAC_TARGET)
binpac_target(binpac-lib.pac)
binpac_target(binpac_bro-lib.pac)
binpac_target(ayiya.pac
ayiya-protocol.pac ayiya-analyzer.pac)
binpac_target(bittorrent.pac
bittorrent-protocol.pac bittorrent-analyzer.pac)
binpac_target(dce_rpc.pac
@ -206,6 +209,8 @@ binpac_target(netflow.pac
netflow-protocol.pac netflow-analyzer.pac)
binpac_target(smb.pac
smb-protocol.pac smb-pipe.pac smb-mailslot.pac)
binpac_target(socks.pac
socks-protocol.pac socks-analyzer.pac)
binpac_target(ssl.pac
ssl-defs.pac ssl-protocol.pac ssl-analyzer.pac)
binpac_target(syslog.pac
@ -277,6 +282,7 @@ set(bro_SRCS
Anon.cc
ARP.cc
Attr.cc
AYIYA.cc
BackDoor.cc
Base64.cc
BitTorrent.cc
@ -375,6 +381,7 @@ set(bro_SRCS
SmithWaterman.cc
SMB.cc
SMTP.cc
SOCKS.cc
SSH.cc
SSL.cc
Scope.cc
@ -391,9 +398,11 @@ set(bro_SRCS
TCP_Endpoint.cc
TCP_Reassembler.cc
Telnet.cc
Teredo.cc
Timer.cc
Traverse.cc
Trigger.cc
TunnelEncapsulation.cc
Type.cc
UDP.cc
Val.cc


@ -13,6 +13,7 @@
#include "Timer.h"
#include "PIA.h"
#include "binpac.h"
#include "TunnelEncapsulation.h"
void ConnectionTimer::Init(Connection* arg_conn, timer_func arg_timer,
int arg_do_expire)
@ -112,7 +113,7 @@ unsigned int Connection::external_connections = 0;
IMPLEMENT_SERIAL(Connection, SER_CONNECTION);
Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
uint32 flow)
uint32 flow, const EncapsulationStack* arg_encap)
{
sessions = s;
key = k;
@ -160,6 +161,11 @@ Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
uid = 0; // Will set later.
if ( arg_encap )
encapsulation = new EncapsulationStack(*arg_encap);
else
encapsulation = 0;
if ( conn_timer_mgr )
{
++external_connections;
@ -187,12 +193,40 @@ Connection::~Connection()
delete key;
delete root_analyzer;
delete conn_timer_mgr;
delete encapsulation;
--current_connections;
if ( conn_timer_mgr )
--external_connections;
}
void Connection::CheckEncapsulation(const EncapsulationStack* arg_encap)
{
if ( encapsulation && arg_encap )
{
if ( *encapsulation != *arg_encap )
{
Event(tunnel_changed, 0, arg_encap->GetVectorVal());
delete encapsulation;
encapsulation = new EncapsulationStack(*arg_encap);
}
}
else if ( encapsulation )
{
EncapsulationStack empty;
Event(tunnel_changed, 0, empty.GetVectorVal());
delete encapsulation;
encapsulation = 0;
}
else if ( arg_encap )
{
Event(tunnel_changed, 0, arg_encap->GetVectorVal());
encapsulation = new EncapsulationStack(*arg_encap);
}
}
void Connection::Done()
{
finished = 1;
@ -349,6 +383,9 @@ RecordVal* Connection::BuildConnVal()
char tmp[20];
conn_val->Assign(9, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62)));
if ( encapsulation && encapsulation->Depth() > 0 )
conn_val->Assign(10, encapsulation->GetVectorVal());
}
if ( root_analyzer )


@ -13,6 +13,7 @@
#include "RuleMatcher.h"
#include "AnalyzerTags.h"
#include "IPAddr.h"
#include "TunnelEncapsulation.h"
class Connection;
class ConnectionTimer;
@ -51,9 +52,16 @@ class Analyzer;
class Connection : public BroObj {
public:
Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
uint32 flow);
uint32 flow, const EncapsulationStack* arg_encap);
virtual ~Connection();
// Invoked when an encapsulation is discovered. It records the
// encapsulation with the connection and raises a "tunnel_changed"
// event if it's different from the previous encapsulation (or the
// first encountered). encap can be null to indicate no
// encapsulation.
void CheckEncapsulation(const EncapsulationStack* encap);
// Invoked when connection is about to be removed. Use Ref(this)
// inside Done to keep the connection object around (though it'll
// no longer be accessible from the dictionary of active
@ -242,6 +250,11 @@ public:
void SetUID(uint64 arg_uid) { uid = arg_uid; }
uint64 GetUID() const { return uid; }
const EncapsulationStack* GetEncapsulation() const
{ return encapsulation; }
void CheckFlowLabel(bool is_orig, uint32 flow_label);
protected:
@ -279,6 +292,7 @@ protected:
double inactivity_timeout;
RecordVal* conn_val;
LoginConn* login_conn; // either nil, or this
const EncapsulationStack* encapsulation; // tunnels
int suppress_event; // suppress certain events to once per conn.
unsigned int installed_status_timer:1;


@ -572,8 +572,9 @@ void BroFile::InstallRotateTimer()
const char* base_time = log_rotate_base_time ?
log_rotate_base_time->AsString()->CheckString() : 0;
double base = parse_rotate_base_time(base_time);
double delta_t =
calc_next_rotate(rotate_interval, base_time);
calc_next_rotate(network_time, rotate_interval, base);
rotate_timer = new RotateTimer(network_time + delta_t,
this, true);
}


@ -329,7 +329,17 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
bodies[i].stmts->GetLocationInfo());
Unref(result);
result = bodies[i].stmts->Exec(f, flow);
try
{
result = bodies[i].stmts->Exec(f, flow);
}
catch ( InterpreterException& e )
{
// Already reported, but we continue exec'ing remaining bodies.
continue;
}
if ( f->HasDelayed() )
{


@ -64,7 +64,8 @@ void ICMP_Analyzer::DeliverPacket(int len, const u_char* data,
break;
default:
reporter->InternalError("unexpected IP proto in ICMP analyzer");
reporter->InternalError("unexpected IP proto in ICMP analyzer: %d",
ip->NextProto());
break;
}
@ -168,8 +169,10 @@ void ICMP_Analyzer::NextICMP6(double t, const struct icmp* icmpp, int len, int c
NeighborSolicit(t, icmpp, len, caplen, data, ip_hdr);
break;
case ND_ROUTER_SOLICIT:
RouterSolicit(t, icmpp, len, caplen, data, ip_hdr);
break;
case ICMP6_ROUTER_RENUMBERING:
Router(t, icmpp, len, caplen, data, ip_hdr);
ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr);
break;
#if 0
@ -514,10 +517,13 @@ void ICMP_Analyzer::RouterAdvert(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{
EventHandlerPtr f = icmp_router_advertisement;
uint32 reachable, retrans;
memcpy(&reachable, data, sizeof(reachable));
memcpy(&retrans, data + sizeof(reachable), sizeof(retrans));
uint32 reachable = 0, retrans = 0;
if ( caplen >= (int)sizeof(reachable) )
memcpy(&reachable, data, sizeof(reachable));
if ( caplen >= (int)sizeof(reachable) + (int)sizeof(retrans) )
memcpy(&retrans, data + sizeof(reachable), sizeof(retrans));
val_list* vl = new val_list;
vl->append(BuildConnVal());
@ -533,6 +539,9 @@ void ICMP_Analyzer::RouterAdvert(double t, const struct icmp* icmpp, int len,
vl->append(new IntervalVal((double)ntohl(reachable), Milliseconds));
vl->append(new IntervalVal((double)ntohl(retrans), Milliseconds));
int opt_offset = sizeof(reachable) + sizeof(retrans);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl);
}
@ -541,9 +550,10 @@ void ICMP_Analyzer::NeighborAdvert(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{
EventHandlerPtr f = icmp_neighbor_advertisement;
in6_addr tgtaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr));
IPAddr tgtaddr;
if ( caplen >= (int)sizeof(in6_addr) )
tgtaddr = IPAddr(*((const in6_addr*)data));
val_list* vl = new val_list;
vl->append(BuildConnVal());
@ -551,7 +561,10 @@ void ICMP_Analyzer::NeighborAdvert(double t, const struct icmp* icmpp, int len,
vl->append(new Val(icmpp->icmp_num_addrs & 0x80, TYPE_BOOL)); // Router
vl->append(new Val(icmpp->icmp_num_addrs & 0x40, TYPE_BOOL)); // Solicited
vl->append(new Val(icmpp->icmp_num_addrs & 0x20, TYPE_BOOL)); // Override
vl->append(new AddrVal(IPAddr(tgtaddr)));
vl->append(new AddrVal(tgtaddr));
int opt_offset = sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl);
}
@ -561,14 +574,18 @@ void ICMP_Analyzer::NeighborSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{
EventHandlerPtr f = icmp_neighbor_solicitation;
in6_addr tgtaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr));
IPAddr tgtaddr;
if ( caplen >= (int)sizeof(in6_addr) )
tgtaddr = IPAddr(*((const in6_addr*)data));
val_list* vl = new val_list;
vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(new AddrVal(IPAddr(tgtaddr)));
vl->append(new AddrVal(tgtaddr));
int opt_offset = sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl);
}
@ -578,40 +595,36 @@ void ICMP_Analyzer::Redirect(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{
EventHandlerPtr f = icmp_redirect;
in6_addr tgtaddr, dstaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr));
memcpy(&dstaddr.s6_addr, data + sizeof(tgtaddr.s6_addr), sizeof(dstaddr.s6_addr));
IPAddr tgtaddr, dstaddr;
if ( caplen >= (int)sizeof(in6_addr) )
tgtaddr = IPAddr(*((const in6_addr*)data));
if ( caplen >= 2 * (int)sizeof(in6_addr) )
dstaddr = IPAddr(*((const in6_addr*)(data + sizeof(in6_addr))));
val_list* vl = new val_list;
vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(new AddrVal(IPAddr(tgtaddr)));
vl->append(new AddrVal(tgtaddr));
vl->append(new AddrVal(IPAddr(dstaddr)));
vl->append(new AddrVal(dstaddr));
int opt_offset = 2 * sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl);
}
void ICMP_Analyzer::Router(double t, const struct icmp* icmpp, int len,
void ICMP_Analyzer::RouterSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{
EventHandlerPtr f = 0;
EventHandlerPtr f = icmp_router_solicitation;
switch ( icmpp->icmp_type )
{
case ND_ROUTER_SOLICIT:
f = icmp_router_solicitation;
break;
case ICMP6_ROUTER_RENUMBERING:
default:
ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr);
return;
}
val_list* vl = new val_list;
vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(BuildNDOptionsVal(caplen, data));
ConnectionEvent(f, vl);
}
@ -684,6 +697,144 @@ void ICMP_Analyzer::Context6(double t, const struct icmp* icmpp,
}
}
VectorVal* ICMP_Analyzer::BuildNDOptionsVal(int caplen, const u_char* data)
{
static RecordType* icmp6_nd_option_type = 0;
static RecordType* icmp6_nd_prefix_info_type = 0;
if ( ! icmp6_nd_option_type )
{
icmp6_nd_option_type = internal_type("icmp6_nd_option")->AsRecordType();
icmp6_nd_prefix_info_type =
internal_type("icmp6_nd_prefix_info")->AsRecordType();
}
VectorVal* vv = new VectorVal(
internal_type("icmp6_nd_options")->AsVectorType());
while ( caplen > 0 )
{
// Must have at least type & length to continue parsing options.
if ( caplen < 2 )
{
Weird("truncated_ICMPv6_ND_options");
break;
}
uint8 type = *((const uint8*)data);
uint8 length = *((const uint8*)(data + 1));
if ( length == 0 )
{
Weird("zero_length_ICMPv6_ND_option");
break;
}
RecordVal* rv = new RecordVal(icmp6_nd_option_type);
rv->Assign(0, new Val(type, TYPE_COUNT));
rv->Assign(1, new Val(length, TYPE_COUNT));
// Adjust length to be in units of bytes, exclude type/length fields.
length = length * 8 - 2;
data += 2;
caplen -= 2;
bool set_payload_field = false;
// Only parse out known options that are there in full.
switch ( type ) {
case 1:
case 2:
// Source/Target Link-layer Address option
{
if ( caplen >= length )
{
BroString* link_addr = new BroString(data, length, 0);
rv->Assign(2, new StringVal(link_addr));
}
else
set_payload_field = true;
break;
}
case 3:
// Prefix Information option
{
if ( caplen >= 30 )
{
RecordVal* info = new RecordVal(icmp6_nd_prefix_info_type);
uint8 prefix_len = *((const uint8*)(data));
bool L_flag = (*((const uint8*)(data + 1)) & 0x80) != 0;
bool A_flag = (*((const uint8*)(data + 1)) & 0x40) != 0;
uint32 valid_life = *((const uint32*)(data + 2));
uint32 prefer_life = *((const uint32*)(data + 6));
in6_addr prefix = *((const in6_addr*)(data + 14));
info->Assign(0, new Val(prefix_len, TYPE_COUNT));
info->Assign(1, new Val(L_flag, TYPE_BOOL));
info->Assign(2, new Val(A_flag, TYPE_BOOL));
info->Assign(3, new IntervalVal((double)ntohl(valid_life), Seconds));
info->Assign(4, new IntervalVal((double)ntohl(prefer_life), Seconds));
info->Assign(5, new AddrVal(IPAddr(prefix)));
rv->Assign(3, info);
}
else
set_payload_field = true;
break;
}
case 4:
// Redirected Header option
{
if ( caplen >= length )
{
const u_char* hdr = data + 6;
rv->Assign(4, ExtractICMP6Context(length - 6, hdr));
}
else
set_payload_field = true;
break;
}
case 5:
// MTU option
{
if ( caplen >= 6 )
rv->Assign(5, new Val(ntohl(*((const uint32*)(data + 2))),
TYPE_COUNT));
else
set_payload_field = true;
break;
}
default:
{
set_payload_field = true;
break;
}
}
if ( set_payload_field )
{
BroString* payload =
new BroString(data, min((int)length, caplen), 0);
rv->Assign(6, new StringVal(payload));
}
data += length;
caplen -= length;
vv->Assign(vv->Size(), rv, 0);
}
return vv;
}
int ICMP4_counterpart(int icmp_type, int icmp_code, bool& is_one_way)
{
is_one_way = false;


@ -48,7 +48,7 @@ protected:
int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void NeighborSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void Router(double t, const struct icmp* icmpp, int len,
void RouterSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void Describe(ODesc* d) const;
@ -75,6 +75,9 @@ protected:
void Context6(double t, const struct icmp* icmpp, int len, int caplen,
const u_char*& data, const IP_Hdr* ip_hdr);
// RFC 4861 Neighbor Discover message options
VectorVal* BuildNDOptionsVal(int caplen, const u_char* data);
RecordVal* icmp_conn_val;
int type;
int code;


@ -31,7 +31,6 @@ int tcp_SYN_ack_ok;
int tcp_match_undelivered;
int encap_hdr_size;
int udp_tunnel_port;
double frag_timeout;
@ -49,6 +48,8 @@ int tcp_excessive_data_without_further_acks;
RecordType* x509_type;
RecordType* socks_address;
double non_analyzed_lifetime;
double tcp_inactivity_timeout;
double udp_inactivity_timeout;
@ -328,8 +329,6 @@ void init_net_var()
encap_hdr_size = opt_internal_int("encap_hdr_size");
udp_tunnel_port = opt_internal_int("udp_tunnel_port") & ~UDP_PORT_MASK;
frag_timeout = opt_internal_double("frag_timeout");
tcp_SYN_timeout = opt_internal_double("tcp_SYN_timeout");
@ -347,6 +346,8 @@ void init_net_var()
opt_internal_int("tcp_excessive_data_without_further_acks");
x509_type = internal_type("X509")->AsRecordType();
socks_address = internal_type("SOCKS::Address")->AsRecordType();
non_analyzed_lifetime = opt_internal_double("non_analyzed_lifetime");
tcp_inactivity_timeout = opt_internal_double("tcp_inactivity_timeout");


@@ -34,7 +34,6 @@ extern int tcp_SYN_ack_ok;
extern int tcp_match_undelivered;
extern int encap_hdr_size;
extern int udp_tunnel_port;
extern double frag_timeout;
@@ -52,6 +51,8 @@ extern int tcp_excessive_data_without_further_acks;
extern RecordType* x509_type;
extern RecordType* socks_address;
extern double non_analyzed_lifetime;
extern double tcp_inactivity_timeout;
extern double udp_inactivity_timeout;


@@ -193,7 +193,18 @@ void PktSrc::Process()
		{
		protocol = (data[3] << 24) + (data[2] << 16) + (data[1] << 8) + data[0];
		if ( protocol != AF_INET && protocol != AF_INET6 )
		// From the Wireshark Wiki: "AF_INET6, unfortunately, has
		// different values in {NetBSD,OpenBSD,BSD/OS},
		// {FreeBSD,DragonFlyBSD}, and {Darwin/Mac OS X}, so an IPv6
		// packet might have a link-layer header with 24, 28, or 30
		// as the AF_ value." As we may be reading traces captured on
		// platforms other than what we're running on, we accept them
		// all here.
		if ( protocol != AF_INET
		     && protocol != AF_INET6
		     && protocol != 24
		     && protocol != 28
		     && protocol != 30 )
			{
			sessions->Weird("non_ip_packet_in_null_transport", &hdr, data);
			data = 0;
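The check above accepts several platform-specific values for the 4-byte address-family field at the front of a DLT_NULL/loopback frame. A minimal sketch of that classification, as a hypothetical helper (not part of Bro; the Linux AF_INET6 value 10 is an added assumption):

```python
import struct

AF_INET = 2
# AF_INET6 differs per platform: 10 (Linux), 24 (NetBSD/OpenBSD/BSD/OS),
# 28 (FreeBSD/DragonFly BSD), 30 (Darwin/Mac OS X).
IPV6_AF_VALUES = {10, 24, 28, 30}

def classify_null_af(link_header: bytes, big_endian: bool = False) -> str:
    """Classify the leading 4-byte AF_ field of a DLT_NULL frame."""
    fmt = ">I" if big_endian else "<I"
    (af,) = struct.unpack(fmt, link_header[:4])
    if af == AF_INET:
        return "ipv4"
    if af in IPV6_AF_VALUES:
        return "ipv6"
    return "non_ip_packet_in_null_transport"
```

As in the C++ above, accepting all three BSD-family IPv6 values lets traces captured on a different platform than the one running the analysis still be decoded.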


@@ -2503,17 +2503,17 @@ bool RemoteSerializer::ProcessRemotePrint()
	return true;
	}

bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields)
bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields)
	{
	loop_over_list(peers, i)
		{
		SendLogCreateWriter(peers[i]->id, id, writer, path, num_fields, fields);
		SendLogCreateWriter(peers[i]->id, id, writer, info, num_fields, fields);
		}

	return true;
	}

bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields)
bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields)
	{
	SetErrorDescr("logging");
@@ -2535,8 +2535,8 @@ bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal*
	bool success = fmt.Write(id->AsEnum(), "id") &&
		fmt.Write(writer->AsEnum(), "writer") &&
		fmt.Write(path, "path") &&
		fmt.Write(num_fields, "num_fields") &&
		fmt.Write(num_fields, "num_fields");
		info.Write(&fmt);

	if ( ! success )
		goto error;
@@ -2691,13 +2691,13 @@ bool RemoteSerializer::ProcessLogCreateWriter()
	fmt.StartRead(current_args->data, current_args->len);

	int id, writer;
	string path;
	int num_fields;
	logging::WriterBackend::WriterInfo info;

	bool success = fmt.Read(&id, "id") &&
		fmt.Read(&writer, "writer") &&
		fmt.Read(&path, "path") &&
		fmt.Read(&num_fields, "num_fields") &&
		fmt.Read(&num_fields, "num_fields");
		info.Read(&fmt);

	if ( ! success )
		goto error;
@@ -2716,7 +2716,7 @@ bool RemoteSerializer::ProcessLogCreateWriter()
	id_val = new EnumVal(id, BifType::Enum::Log::ID);
	writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);

	if ( ! log_mgr->CreateWriter(id_val, writer_val, path, num_fields, fields, true, false) )
	if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields, true, false) )
		goto error;

	Unref(id_val);
@@ -4208,32 +4208,38 @@ bool SocketComm::Listen()
bool SocketComm::AcceptConnection(int fd)
	{
	sockaddr_storage client;
	socklen_t len = sizeof(client);
	union {
		sockaddr_storage ss;
		sockaddr_in s4;
		sockaddr_in6 s6;
	} client;

	socklen_t len = sizeof(client.ss);
	int clientfd = accept(fd, (sockaddr*) &client, &len);
	int clientfd = accept(fd, (sockaddr*) &client.ss, &len);
	if ( clientfd < 0 )
		{
		Error(fmt("accept failed, %s %d", strerror(errno), errno));
		return false;
		}

	if ( client.ss_family != AF_INET && client.ss_family != AF_INET6 )
	if ( client.ss.ss_family != AF_INET && client.ss.ss_family != AF_INET6 )
		{
		Error(fmt("accept fail, unknown address family %d", client.ss_family));
		Error(fmt("accept fail, unknown address family %d",
			client.ss.ss_family));
		close(clientfd);
		return false;
		}

	Peer* peer = new Peer;
	peer->id = id_counter++;
	peer->ip = client.ss_family == AF_INET ?
	peer->ip = client.ss.ss_family == AF_INET ?
		IPAddr(((sockaddr_in*)&client)->sin_addr) :
		IPAddr(client.s4.sin_addr) :
		IPAddr(((sockaddr_in6*)&client)->sin6_addr);
		IPAddr(client.s6.sin6_addr);
	peer->port = client.ss_family == AF_INET ?
	peer->port = client.ss.ss_family == AF_INET ?
		ntohs(((sockaddr_in*)&client)->sin_port) :
		ntohs(client.s4.sin_port) :
		ntohs(((sockaddr_in6*)&client)->sin6_port);
		ntohs(client.s6.sin6_port);
	peer->connected = true;
	peer->ssl = listen_ssl;


@@ -9,6 +9,7 @@
#include "IOSource.h"
#include "Stats.h"
#include "File.h"
#include "logging/WriterBackend.h"

#include <vector>
#include <string>
@@ -104,10 +105,10 @@ public:
	bool SendPrintHookEvent(BroFile* f, const char* txt, size_t len);

	// Send a request to create a writer on a remote side.
	bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields);
	bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields);

	// Broadcasts a request to create a writer.
	bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields);
	bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields);

	// Broadcast a log entry to everybody interested.
	bool SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals);

src/SOCKS.cc (new file, 79 lines)

@@ -0,0 +1,79 @@
#include "SOCKS.h"
#include "socks_pac.h"
#include "TCP_Reassembler.h"
SOCKS_Analyzer::SOCKS_Analyzer(Connection* conn)
: TCP_ApplicationAnalyzer(AnalyzerTag::SOCKS, conn)
{
interp = new binpac::SOCKS::SOCKS_Conn(this);
orig_done = resp_done = false;
pia = 0;
}
SOCKS_Analyzer::~SOCKS_Analyzer()
{
delete interp;
}
void SOCKS_Analyzer::EndpointDone(bool orig)
{
if ( orig )
orig_done = true;
else
resp_done = true;
}
void SOCKS_Analyzer::Done()
{
TCP_ApplicationAnalyzer::Done();
interp->FlowEOF(true);
interp->FlowEOF(false);
}
void SOCKS_Analyzer::EndpointEOF(TCP_Reassembler* endp)
{
TCP_ApplicationAnalyzer::EndpointEOF(endp);
interp->FlowEOF(endp->IsOrig());
}
void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{
TCP_ApplicationAnalyzer::DeliverStream(len, data, orig);
assert(TCP());
if ( TCP()->IsPartial() )
// punt on partial.
return;
if ( orig_done && resp_done )
{
// Finished decapsulating tunnel layer. Now do standard processing
		// with the rest of the connection.
		//
		// Note that we assume that no payload data arrives before both endpoints
		// are done with their part of the SOCKS protocol.
if ( ! pia )
{
pia = new PIA_TCP(Conn());
AddChildAnalyzer(pia);
pia->FirstPacket(true, 0);
pia->FirstPacket(false, 0);
}
ForwardStream(len, data, orig);
}
else
{
interp->NewData(orig, data, data + len);
}
}
void SOCKS_Analyzer::Undelivered(int seq, int len, bool orig)
{
TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
interp->NewGap(orig, len);
}
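The analyzer's DeliverStream gates payload on handshake completion: each direction's bytes go to the SOCKS parser until both endpoints have finished their half of the handshake, after which everything is forwarded to the inner-protocol analyzer (PIA in Bro). A minimal sketch of that gating, with hypothetical names and a trivially simplified one-message-per-side "handshake" standing in for the binpac-generated parser:

```python
class SocksGate:
    """Toy model of SOCKS_Analyzer's handshake-then-tunnel gating."""

    def __init__(self):
        self.orig_done = False
        self.resp_done = False
        self.tunneled = []  # stands in for PIA / ForwardStream()

    def endpoint_done(self, orig: bool):
        if orig:
            self.orig_done = True
        else:
            self.resp_done = True

    def deliver(self, data: bytes, orig: bool):
        if self.orig_done and self.resp_done:
            # Both sides finished the SOCKS exchange: tunnel the payload.
            self.tunneled.append((orig, data))
        else:
            # Still in the handshake: feed the protocol parser.
            self.parse_handshake(data, orig)

    def parse_handshake(self, data: bytes, orig: bool):
        # Assumption for illustration: one message per side completes it.
        self.endpoint_done(orig)
```

The real analyzer also creates the PIA child lazily on the first post-handshake delivery and replays FirstPacket for both directions; that setup is omitted here.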

src/SOCKS.h (new file, 45 lines)

@@ -0,0 +1,45 @@
#ifndef socks_h
#define socks_h
// SOCKS v4 analyzer.
#include "TCP.h"
#include "PIA.h"
namespace binpac {
namespace SOCKS {
class SOCKS_Conn;
}
}
class SOCKS_Analyzer : public TCP_ApplicationAnalyzer {
public:
SOCKS_Analyzer(Connection* conn);
~SOCKS_Analyzer();
void EndpointDone(bool orig);
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
virtual void EndpointEOF(TCP_Reassembler* endp);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new SOCKS_Analyzer(conn); }
static bool Available()
{
return socks_request || socks_reply;
}
protected:
bool orig_done;
bool resp_done;
PIA_TCP *pia;
binpac::SOCKS::SOCKS_Conn* interp;
};
#endif


@@ -30,6 +30,7 @@
#include "DPM.h"
#include "PacketSort.h"
#include "TunnelEncapsulation.h"

// These represent NetBIOS services on ephemeral ports. They're numbered
// so that we can use a single int to hold either an actual TCP/UDP server
@@ -67,6 +68,26 @@ void TimerMgrExpireTimer::Dispatch(double t, int is_expire)
		}
	}
void IPTunnelTimer::Dispatch(double t, int is_expire)
{
NetSessions::IPTunnelMap::const_iterator it =
sessions->ip_tunnels.find(tunnel_idx);
if ( it == sessions->ip_tunnels.end() )
return;
double last_active = it->second.second;
double inactive_time = t > last_active ? t - last_active : 0;
if ( inactive_time >= BifConst::Tunnel::ip_tunnel_timeout )
// tunnel activity timed out, delete it from map
sessions->ip_tunnels.erase(tunnel_idx);
else if ( ! is_expire )
// tunnel activity didn't timeout, schedule another timer
timer_mgr->Add(new IPTunnelTimer(t, tunnel_idx));
}
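IPTunnelTimer::Dispatch implements a standard inactivity-timeout pattern: when the timer fires, the tunnel is deleted if it has been idle at least the timeout, otherwise another check is scheduled. A minimal sketch (hypothetical names; 600 s stands in for Tunnel::ip_tunnel_timeout, whose real default is defined in script-land):

```python
IP_TUNNEL_TIMEOUT = 600.0  # assumption standing in for Tunnel::ip_tunnel_timeout

def dispatch(ip_tunnels, tunnel_idx, now, reschedule, is_expire=False):
    """Delete an idle tunnel from the map, or reschedule the check."""
    if tunnel_idx not in ip_tunnels:
        return  # tunnel already gone

    last_active = ip_tunnels[tunnel_idx]
    inactive = max(0.0, now - last_active)

    if inactive >= IP_TUNNEL_TIMEOUT:
        del ip_tunnels[tunnel_idx]          # tunnel activity timed out
    elif not is_expire:
        reschedule(now, tunnel_idx)         # check again later
```

Because each packet through the tunnel refreshes the last-active timestamp (see the IPPROTO_IPV4/IPV6 case below in Sessions.cc), a busy tunnel keeps pushing its deletion further into the future.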
NetSessions::NetSessions()
	{
	TypeList* t = new TypeList();
@@ -142,16 +163,6 @@ void NetSessions::Done()
	{
	}
namespace // private namespace
{
bool looks_like_IPv4_packet(int len, const struct ip* ip_hdr)
{
if ( len < int(sizeof(struct ip)) )
return false;
return ip_hdr->ip_v == 4 && ntohs(ip_hdr->ip_len) == len;
}
}
void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
				const u_char* pkt, int hdr_size,
				PktSrc* src_ps, PacketSortElement* pkt_elem)
@@ -168,60 +179,8 @@ void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
	}

	if ( encap_hdr_size > 0 && ip_data )
		// Blanket encapsulation
		hdr_size += encap_hdr_size;
		{
		// We're doing tunnel encapsulation. Check whether there's
// a particular associated port.
//
// Should we discourage the use of encap_hdr_size for UDP
// tunnneling? It is probably better handled by enabling
// BifConst::parse_udp_tunnels instead of specifying a fixed
// encap_hdr_size.
if ( udp_tunnel_port > 0 )
{
ASSERT(ip_hdr);
if ( ip_hdr->ip_p == IPPROTO_UDP )
{
const struct udphdr* udp_hdr =
reinterpret_cast<const struct udphdr*>
(ip_data);
if ( ntohs(udp_hdr->uh_dport) == udp_tunnel_port )
{
// A match.
hdr_size += encap_hdr_size;
}
}
}
else
// Blanket encapsulation
hdr_size += encap_hdr_size;
}
// Check IP packets encapsulated through UDP tunnels.
// Specifying a udp_tunnel_port is optional but recommended (to avoid
// the cost of checking every UDP packet).
else if ( BifConst::parse_udp_tunnels && ip_data && ip_hdr->ip_p == IPPROTO_UDP )
{
const struct udphdr* udp_hdr =
reinterpret_cast<const struct udphdr*>(ip_data);
if ( udp_tunnel_port == 0 || // 0 matches any port
udp_tunnel_port == ntohs(udp_hdr->uh_dport) )
{
const u_char* udp_data =
ip_data + sizeof(struct udphdr);
const struct ip* ip_encap =
reinterpret_cast<const struct ip*>(udp_data);
const int ip_encap_len =
ntohs(udp_hdr->uh_ulen) - sizeof(struct udphdr);
const int ip_encap_caplen =
hdr->caplen - (udp_data - pkt);
if ( looks_like_IPv4_packet(ip_encap_len, ip_encap) )
hdr_size = udp_data - pkt;
}
}
	if ( src_ps->FilterType() == TYPE_FILTER_NORMAL )
		NextPacket(t, hdr, pkt, hdr_size, pkt_elem);
@@ -251,7 +210,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
			// difference here is that header extraction in
			// PacketSort does not generate Weird events.
			DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size);
			DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size, 0);

		else
			{
@@ -276,7 +235,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
		if ( ip->ip_v == 4 )
			{
			IP_Hdr ip_hdr(ip, false);
			DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size);
			DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0);
			}

		else if ( ip->ip_v == 6 )
@@ -288,7 +247,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
			}

		IP_Hdr ip_hdr((const struct ip6_hdr*) (pkt + hdr_size), false, caplen);
		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size);
		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0);
		}

	else if ( ARP_Analyzer::IsARP(pkt, hdr_size) )
@@ -410,7 +369,7 @@ int NetSessions::CheckConnectionTag(Connection* conn)
void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
			const IP_Hdr* ip_hdr, const u_char* const pkt,
			int hdr_size)
			int hdr_size, const EncapsulationStack* encapsulation)
	{
	uint32 caplen = hdr->caplen - hdr_size;
	const struct ip* ip4 = ip_hdr->IP4_Hdr();
@@ -418,7 +377,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	uint32 len = ip_hdr->TotalLen();
	if ( hdr->len < len + hdr_size )
		{
		Weird("truncated_IP", hdr, pkt);
		Weird("truncated_IP", hdr, pkt, encapsulation);
		return;
		}
@@ -430,7 +389,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	if ( ! ignore_checksums && ip4 &&
	     ones_complement_checksum((void*) ip4, ip_hdr_len, 0) != 0xffff )
		{
		Weird("bad_IP_checksum", hdr, pkt);
		Weird("bad_IP_checksum", hdr, pkt, encapsulation);
		return;
		}
@@ -445,7 +404,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	if ( caplen < len )
		{
		Weird("incompletely_captured_fragment", ip_hdr);
		Weird("incompletely_captured_fragment", ip_hdr, encapsulation);

		// Don't try to reassemble, that's doomed.
		// Discard all except the first fragment (which
@@ -472,7 +431,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	len -= ip_hdr_len;	// remove IP header
	caplen -= ip_hdr_len;

	// We stop building the chain when seeing IPPROTO_ESP so if it's
	// there, it's always the last.
	if ( ip_hdr->LastHeader() == IPPROTO_ESP )
		{
@@ -497,7 +456,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
		if ( ! ignore_checksums && mobility_header_checksum(ip_hdr) != 0xffff )
			{
			Weird("bad_MH_checksum", hdr, pkt);
			Weird("bad_MH_checksum", hdr, pkt, encapsulation);
			Remove(f);
			return;
			}
@@ -510,7 +469,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
			}

		if ( ip_hdr->NextProto() != IPPROTO_NONE )
			Weird("mobility_piggyback", hdr, pkt);
			Weird("mobility_piggyback", hdr, pkt, encapsulation);

		Remove(f);
		return;
@@ -519,7 +478,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	int proto = ip_hdr->NextProto();

	if ( CheckHeaderTrunc(proto, len, caplen, hdr, pkt) )
	if ( CheckHeaderTrunc(proto, len, caplen, hdr, pkt, encapsulation) )
		{
		Remove(f);
		return;
@@ -585,8 +544,83 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
		break;
		}
case IPPROTO_IPV4:
case IPPROTO_IPV6:
{
if ( ! BifConst::Tunnel::enable_ip )
{
Weird("IP_tunnel", ip_hdr, encapsulation);
Remove(f);
return;
}
if ( encapsulation &&
encapsulation->Depth() >= BifConst::Tunnel::max_depth )
{
Weird("exceeded_tunnel_max_depth", ip_hdr, encapsulation);
Remove(f);
return;
}
// Check for a valid inner packet first.
IP_Hdr* inner = 0;
int result = ParseIPPacket(caplen, data, proto, inner);
if ( result < 0 )
Weird("truncated_inner_IP", ip_hdr, encapsulation);
else if ( result > 0 )
Weird("inner_IP_payload_length_mismatch", ip_hdr, encapsulation);
if ( result != 0 )
{
delete inner;
Remove(f);
return;
}
// Look up to see if we've already seen this IP tunnel, identified
// by the pair of IP addresses, so that we can always associate the
// same UID with it.
IPPair tunnel_idx;
if ( ip_hdr->SrcAddr() < ip_hdr->DstAddr() )
tunnel_idx = IPPair(ip_hdr->SrcAddr(), ip_hdr->DstAddr());
else
tunnel_idx = IPPair(ip_hdr->DstAddr(), ip_hdr->SrcAddr());
IPTunnelMap::iterator it = ip_tunnels.find(tunnel_idx);
if ( it == ip_tunnels.end() )
{
EncapsulatingConn ec(ip_hdr->SrcAddr(), ip_hdr->DstAddr());
ip_tunnels[tunnel_idx] = TunnelActivity(ec, network_time);
timer_mgr->Add(new IPTunnelTimer(network_time, tunnel_idx));
}
else
it->second.second = network_time;
DoNextInnerPacket(t, hdr, inner, encapsulation,
ip_tunnels[tunnel_idx].first);
Remove(f);
return;
}
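The tunnel_idx computation above orders the two endpoint addresses so that packets in either direction of the same IP-in-IP tunnel map to the same key, and therefore keep the same UID. The idea, as a tiny sketch (hypothetical helper name; string addresses used for illustration):

```python
def tunnel_key(src, dst):
    """Canonical, direction-independent key for an IP-in-IP tunnel."""
    return (src, dst) if src < dst else (dst, src)
```

This is the usual canonicalization trick for keying bidirectional flows in a map.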
case IPPROTO_NONE:
{
	// If the packet is encapsulated in Teredo, then it was a bubble and
	// the Teredo analyzer may have raised an event for that; otherwise we're
	// not sure of the reason for the No Next Header value in the packet.
if ( ! ( encapsulation &&
encapsulation->LastType() == BifEnum::Tunnel::TEREDO ) )
Weird("ipv6_no_next", hdr, pkt);
Remove(f);
return;
}
	default:
		Weird(fmt("unknown_protocol_%d", proto), hdr, pkt);
		Weird(fmt("unknown_protocol_%d", proto), hdr, pkt, encapsulation);
		Remove(f);
		return;
	}
@@ -602,7 +636,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
	conn = (Connection*) d->Lookup(h);
	if ( ! conn )
		{
		conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel());
		conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
		if ( conn )
			d->Insert(h, conn);
		}
@@ -623,12 +657,15 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
			conn->Event(connection_reused, 0);

			Remove(conn);
			conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel());
			conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
			if ( conn )
				d->Insert(h, conn);
			}
		else
			{
			delete h;
			conn->CheckEncapsulation(encapsulation);
			}
		}

	if ( ! conn )
@@ -682,8 +719,70 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
		}
	}
void NetSessions::DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr,
const IP_Hdr* inner, const EncapsulationStack* prev,
const EncapsulatingConn& ec)
{
struct pcap_pkthdr fake_hdr;
fake_hdr.caplen = fake_hdr.len = inner->TotalLen();
if ( hdr )
fake_hdr.ts = hdr->ts;
else
{
fake_hdr.ts.tv_sec = (time_t) network_time;
fake_hdr.ts.tv_usec = (suseconds_t)
((network_time - (double)fake_hdr.ts.tv_sec) * 1000000);
}
const u_char* pkt = 0;
if ( inner->IP4_Hdr() )
pkt = (const u_char*) inner->IP4_Hdr();
else
pkt = (const u_char*) inner->IP6_Hdr();
EncapsulationStack* outer = prev ?
new EncapsulationStack(*prev) : new EncapsulationStack();
outer->Add(ec);
DoNextPacket(t, &fake_hdr, inner, pkt, 0, outer);
delete inner;
delete outer;
}
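When DoNextInnerPacket has no outer pcap header to copy, it synthesizes the fake header's timestamp by splitting the double-valued network_time into whole seconds and microseconds. A sketch of that split (hypothetical function name):

```python
def split_timestamp(network_time: float):
    """Split a double timestamp into (tv_sec, tv_usec), as for a pcap header."""
    tv_sec = int(network_time)                       # whole seconds
    tv_usec = int((network_time - tv_sec) * 1_000_000)  # remaining microseconds
    return tv_sec, tv_usec
```

The fake header's caplen and len are simply set to the inner packet's TotalLen(), since the inner packet is the entire "capture" at that point.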
int NetSessions::ParseIPPacket(int caplen, const u_char* const pkt, int proto,
IP_Hdr*& inner)
{
if ( proto == IPPROTO_IPV6 )
{
if ( caplen < (int)sizeof(struct ip6_hdr) )
return -1;
inner = new IP_Hdr((const struct ip6_hdr*) pkt, false, caplen);
}
else if ( proto == IPPROTO_IPV4 )
{
if ( caplen < (int)sizeof(struct ip) )
return -1;
inner = new IP_Hdr((const struct ip*) pkt, false);
}
else
reporter->InternalError("Bad IP protocol version in DoNextInnerPacket");
if ( (uint32)caplen != inner->TotalLen() )
return (uint32)caplen < inner->TotalLen() ? -1 : 1;
return 0;
}
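ParseIPPacket's return convention encodes how the captured length compares to what the inner header claims: -1 when the capture is too short (including too short to even hold a header), 1 when it is longer than the claimed total length, and 0 on an exact match. A simplified sketch of just the length logic (hypothetical helper; the real function also allocates the IP_Hdr wrapper):

```python
IPV4_MIN_HDR = 20  # sizeof(struct ip)
IPV6_MIN_HDR = 40  # sizeof(struct ip6_hdr)

def check_inner(caplen: int, total_len: int, proto: str) -> int:
    """-1: capture too short; 1: capture longer than claimed; 0: exact match."""
    min_hdr = IPV6_MIN_HDR if proto == "ipv6" else IPV4_MIN_HDR
    if caplen < min_hdr:
        return -1
    if caplen < total_len:
        return -1
    if caplen > total_len:
        return 1
    return 0
```

Only a return of 0 lets DoNextPacket proceed to decapsulate; both mismatch cases raise a weird and drop the packet.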
bool NetSessions::CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
				const struct pcap_pkthdr* h, const u_char* p)
				const struct pcap_pkthdr* h,
				const u_char* p, const EncapsulationStack* encap)
	{
	uint32 min_hdr_len = 0;
	switch ( proto ) {
@@ -693,22 +792,32 @@ bool NetSessions::CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
	case IPPROTO_UDP:
		min_hdr_len = sizeof(struct udphdr);
		break;
	case IPPROTO_IPV4:
		min_hdr_len = sizeof(struct ip);
		break;
	case IPPROTO_IPV6:
		min_hdr_len = sizeof(struct ip6_hdr);
		break;
	case IPPROTO_NONE:
		min_hdr_len = 0;
		break;
	case IPPROTO_ICMP:
	case IPPROTO_ICMPV6:
	default:
		// Use for all other packets.
		min_hdr_len = ICMP_MINLEN;
		break;
	}

	if ( len < min_hdr_len )
		{
		Weird("truncated_header", h, p);
		Weird("truncated_header", h, p, encap);
		return true;
		}

	if ( caplen < min_hdr_len )
		{
		Weird("internally_truncated_header", h, p);
		Weird("internally_truncated_header", h, p, encap);
		return true;
		}
@@ -1004,7 +1113,8 @@ void NetSessions::GetStats(SessionStats& s) const
	}

Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
				const u_char* data, int proto, uint32 flow_label)
				const u_char* data, int proto, uint32 flow_label,
				const EncapsulationStack* encapsulation)
	{
	// FIXME: This should be cleaned up a bit, it's too protocol-specific.
	// But I'm not yet sure what the right abstraction for these things is.
@@ -1060,7 +1170,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
		id = &flip_id;
		}

	Connection* conn = new Connection(this, k, t, id, flow_label);
	Connection* conn = new Connection(this, k, t, id, flow_label, encapsulation);
	conn->SetTransport(tproto);
	dpm->BuildInitialAnalyzerTree(tproto, conn, data);
@@ -1224,18 +1334,26 @@ void NetSessions::Internal(const char* msg, const struct pcap_pkthdr* hdr,
	reporter->InternalError("%s", msg);
	}

void NetSessions::Weird(const char* name,
		const struct pcap_pkthdr* hdr, const u_char* pkt)
void NetSessions::Weird(const char* name, const struct pcap_pkthdr* hdr,
		const u_char* pkt, const EncapsulationStack* encap)
	{
	if ( hdr )
		dump_this_packet = 1;

	reporter->Weird(name);
	if ( encap && encap->LastType() != BifEnum::Tunnel::NONE )
		reporter->Weird(fmt("%s_in_tunnel", name));
	else
		reporter->Weird(name);
	}

void NetSessions::Weird(const char* name, const IP_Hdr* ip)
void NetSessions::Weird(const char* name, const IP_Hdr* ip,
		const EncapsulationStack* encap)
	{
	reporter->Weird(ip->SrcAddr(), ip->DstAddr(), name);
	if ( encap && encap->LastType() != BifEnum::Tunnel::NONE )
		reporter->Weird(ip->SrcAddr(), ip->DstAddr(),
			fmt("%s_in_tunnel", name));
	else
		reporter->Weird(ip->SrcAddr(), ip->DstAddr(), name);
	}

unsigned int NetSessions::ConnectionMemoryUsage()


@@ -11,9 +11,12 @@
#include "PacketFilter.h"
#include "Stats.h"
#include "NetVar.h"
#include "TunnelEncapsulation.h"

#include <utility>

struct pcap_pkthdr;
class EncapsulationStack;
class Connection;
class ConnID;
class OSFingerprint;
@@ -105,9 +108,10 @@ public:
	void GetStats(SessionStats& s) const;

	void Weird(const char* name,
		const struct pcap_pkthdr* hdr, const u_char* pkt);
	void Weird(const char* name, const struct pcap_pkthdr* hdr,
		const u_char* pkt, const EncapsulationStack* encap = 0);
	void Weird(const char* name, const IP_Hdr* ip);
	void Weird(const char* name, const IP_Hdr* ip,
		const EncapsulationStack* encap = 0);

	PacketFilter* GetPacketFilter()
		{
@@ -131,6 +135,51 @@ public:
			icmp_conns.Length();
		}
void DoNextPacket(double t, const struct pcap_pkthdr* hdr,
const IP_Hdr* ip_hdr, const u_char* const pkt,
int hdr_size, const EncapsulationStack* encapsulation);
/**
* Wrapper that recurses on DoNextPacket for encapsulated IP packets.
*
* @param t Network time.
* @param hdr If the outer pcap header is available, this pointer can be set
* so that the fake pcap header passed to DoNextPacket will use
* the same timeval. The caplen and len fields of the fake pcap
* header are always set to the TotalLength() of \a inner.
* @param inner Pointer to IP header wrapper of the inner packet, ownership
* of the pointer's memory is assumed by this function.
* @param prev Any previous encapsulation stack of the caller, not including
* the most-recently found depth of encapsulation.
* @param ec The most-recently found depth of encapsulation.
*/
void DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr,
const IP_Hdr* inner, const EncapsulationStack* prev,
const EncapsulatingConn& ec);
/**
* Returns a wrapper IP_Hdr object if \a pkt appears to be a valid IPv4
* or IPv6 header based on whether it's long enough to contain such a header
* and also that the payload length field of that header matches the actual
* length of \a pkt given by \a caplen.
*
* @param caplen The length of \a pkt in bytes.
* @param pkt The inner IP packet data.
* @param proto Either IPPROTO_IPV6 or IPPROTO_IPV4 to indicate which IP
* protocol \a pkt corresponds to.
* @param inner The inner IP packet wrapper pointer to be allocated/assigned
* if \a pkt looks like a valid IP packet or at least long enough
* to hold an IP header.
* @return 0 If the inner IP packet appeared valid, else -1 if \a caplen
* is greater than the supposed IP packet's payload length field or
* 1 if \a caplen is less than the supposed packet's payload length.
* In the -1 case, \a inner may still be non-null if \a caplen was
* long enough to be an IP header, and \a inner is always non-null
* for other return values.
*/
int ParseIPPacket(int caplen, const u_char* const pkt, int proto,
IP_Hdr*& inner);
unsigned int ConnectionMemoryUsage(); unsigned int ConnectionMemoryUsage();
unsigned int ConnectionMemoryUsageConnVals(); unsigned int ConnectionMemoryUsageConnVals();
unsigned int MemoryAllocation(); unsigned int MemoryAllocation();
@@ -140,9 +189,11 @@ protected:
	friend class RemoteSerializer;
	friend class ConnCompressor;
	friend class TimerMgrExpireTimer;
	friend class IPTunnelTimer;

	Connection* NewConn(HashKey* k, double t, const ConnID* id,
			const u_char* data, int proto, uint32 flow_label);
			const u_char* data, int proto, uint32 flow_label,
			const EncapsulationStack* encapsulation);

	// Check whether the tag of the current packet is consistent with
	// the given connection. Returns:
@@ -173,10 +224,6 @@ protected:
			const u_char* const pkt, int hdr_size,
			PacketSortElement* pkt_elem);
void DoNextPacket(double t, const struct pcap_pkthdr* hdr,
const IP_Hdr* ip_hdr, const u_char* const pkt,
int hdr_size);
	void NextPacketSecondary(double t, const struct pcap_pkthdr* hdr,
			const u_char* const pkt, int hdr_size,
			const PktSrc* src_ps);
@@ -194,7 +241,8 @@ protected:
	// from lower-level headers or the length actually captured is less
	// than that protocol's minimum header size.
	bool CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
			const struct pcap_pkthdr* hdr, const u_char* pkt);
			const struct pcap_pkthdr* hdr, const u_char* pkt,
			const EncapsulationStack* encap);

	CompositeHash* ch;
	PDict(Connection) tcp_conns;
@@ -202,6 +250,11 @@ protected:
	PDict(Connection) icmp_conns;
	PDict(FragReassembler) fragments;

	typedef pair<IPAddr, IPAddr> IPPair;
	typedef pair<EncapsulatingConn, double> TunnelActivity;
	typedef std::map<IPPair, TunnelActivity> IPTunnelMap;
	IPTunnelMap ip_tunnels;

	ARP_Analyzer* arp_analyzer;
	SteppingStoneManager* stp_manager;
@@ -219,6 +272,21 @@ protected:
	TimerMgrMap timer_mgrs;
};
class IPTunnelTimer : public Timer {
public:
IPTunnelTimer(double t, NetSessions::IPPair p)
: Timer(t + BifConst::Tunnel::ip_tunnel_timeout,
TIMER_IP_TUNNEL_INACTIVITY), tunnel_idx(p) {}
~IPTunnelTimer() {}
void Dispatch(double t, int is_expire);
protected:
NetSessions::IPPair tunnel_idx;
};
// Manager for the currently active sessions.
extern NetSessions* sessions;

src/Teredo.cc (new file, 246 lines)

@@ -0,0 +1,246 @@
#include "Teredo.h"
#include "IP.h"
#include "Reporter.h"
void Teredo_Analyzer::Done()
{
Analyzer::Done();
Event(udp_session_done);
}
bool TeredoEncapsulation::DoParse(const u_char* data, int& len,
bool found_origin, bool found_auth)
{
if ( len < 2 )
{
Weird("truncated_Teredo");
return false;
}
uint16 tag = ntohs((*((const uint16*)data)));
if ( tag == 0 )
{
// Origin Indication
if ( found_origin )
// can't have multiple origin indications
return false;
if ( len < 8 )
{
Weird("truncated_Teredo_origin_indication");
return false;
}
origin_indication = data;
len -= 8;
data += 8;
return DoParse(data, len, true, found_auth);
}
else if ( tag == 1 )
{
// Authentication
if ( found_origin || found_auth )
// can't have multiple authentication headers and can't come after
// an origin indication
return false;
if ( len < 4 )
{
Weird("truncated_Teredo_authentication");
return false;
}
uint8 id_len = data[2];
uint8 au_len = data[3];
uint16 tot_len = 4 + id_len + au_len + 8 + 1;
if ( len < tot_len )
{
Weird("truncated_Teredo_authentication");
return false;
}
auth = data;
len -= tot_len;
data += tot_len;
return DoParse(data, len, found_origin, true);
}
else if ( ((tag & 0xf000)>>12) == 6 )
{
// IPv6
if ( len < 40 )
{
Weird("truncated_IPv6_in_Teredo");
return false;
}
// There's at least a possible IPv6 header, we'll decide what to do
// later if the payload length field doesn't match the actual length
// of the packet.
inner_ip = data;
return true;
}
return false;
}
RecordVal* TeredoEncapsulation::BuildVal(const IP_Hdr* inner) const
{
static RecordType* teredo_hdr_type = 0;
static RecordType* teredo_auth_type = 0;
static RecordType* teredo_origin_type = 0;
if ( ! teredo_hdr_type )
{
teredo_hdr_type = internal_type("teredo_hdr")->AsRecordType();
teredo_auth_type = internal_type("teredo_auth")->AsRecordType();
teredo_origin_type = internal_type("teredo_origin")->AsRecordType();
}
RecordVal* teredo_hdr = new RecordVal(teredo_hdr_type);
if ( auth )
{
RecordVal* teredo_auth = new RecordVal(teredo_auth_type);
uint8 id_len = *((uint8*)(auth + 2));
uint8 au_len = *((uint8*)(auth + 3));
uint64 nonce = ntohll(*((uint64*)(auth + 4 + id_len + au_len)));
uint8 conf = *((uint8*)(auth + 4 + id_len + au_len + 8));
teredo_auth->Assign(0, new StringVal(
new BroString(auth + 4, id_len, 1)));
teredo_auth->Assign(1, new StringVal(
new BroString(auth + 4 + id_len, au_len, 1)));
teredo_auth->Assign(2, new Val(nonce, TYPE_COUNT));
teredo_auth->Assign(3, new Val(conf, TYPE_COUNT));
teredo_hdr->Assign(0, teredo_auth);
}
if ( origin_indication )
{
RecordVal* teredo_origin = new RecordVal(teredo_origin_type);
uint16 port = ntohs(*((uint16*)(origin_indication + 2))) ^ 0xFFFF;
uint32 addr = ntohl(*((uint32*)(origin_indication + 4))) ^ 0xFFFFFFFF;
teredo_origin->Assign(0, new PortVal(port, TRANSPORT_UDP));
teredo_origin->Assign(1, new AddrVal(htonl(addr)));
teredo_hdr->Assign(1, teredo_origin);
}
teredo_hdr->Assign(2, inner->BuildPktHdrVal());
return teredo_hdr;
}
void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
TeredoEncapsulation te(this);
if ( ! te.Parse(data, len) )
{
ProtocolViolation("Bad Teredo encapsulation", (const char*) data, len);
return;
}
const EncapsulationStack* e = Conn()->GetEncapsulation();
if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
{
Weird("tunnel_depth");
return;
}
IP_Hdr* inner = 0;
int rslt = sessions->ParseIPPacket(len, te.InnerIP(), IPPROTO_IPV6, inner);
if ( rslt > 0 )
{
if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 )
// Teredo bubbles having data after IPv6 header isn't strictly a
// violation, but a little weird.
Weird("Teredo_bubble_with_payload");
else
{
delete inner;
ProtocolViolation("Teredo payload length", (const char*) data, len);
return;
}
}
if ( rslt == 0 || rslt > 0 )
{
if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
! ProtocolConfirmed() )
{
// Only confirm the Teredo tunnel and start decapsulating packets
// when no other sibling analyzer thinks it's already parsing the
// right protocol.
bool sibling_has_confirmed = false;
if ( Parent() )
{
LOOP_OVER_GIVEN_CONST_CHILDREN(i, Parent()->GetChildren())
{
if ( (*i)->ProtocolConfirmed() )
{
sibling_has_confirmed = true;
break;
}
}
}
if ( ! sibling_has_confirmed )
ProtocolConfirmation();
else
{
delete inner;
return;
}
}
else
{
// Aggressively decapsulate anything with valid Teredo encapsulation
ProtocolConfirmation();
}
}
else
{
delete inner;
ProtocolViolation("Truncated Teredo", (const char*) data, len);
return;
}
Val* teredo_hdr = 0;
if ( teredo_packet )
{
teredo_hdr = te.BuildVal(inner);
Conn()->Event(teredo_packet, 0, teredo_hdr);
}
if ( te.Authentication() && teredo_authentication )
{
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
Conn()->Event(teredo_authentication, 0, teredo_hdr);
}
if ( te.OriginIndication() && teredo_origin_indication )
{
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
Conn()->Event(teredo_origin_indication, 0, teredo_hdr);
}
if ( inner->NextProto() == IPPROTO_NONE && teredo_bubble )
{
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
Conn()->Event(teredo_bubble, 0, teredo_hdr);
}
EncapsulatingConn ec(Conn(), BifEnum::Tunnel::TEREDO);
sessions->DoNextInnerPacket(network_time, 0, inner, e, ec);
}

src/Teredo.h (new file)

@@ -0,0 +1,79 @@
#ifndef Teredo_h
#define Teredo_h
#include "Analyzer.h"
#include "NetVar.h"
class Teredo_Analyzer : public Analyzer {
public:
Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn)
{}
virtual ~Teredo_Analyzer()
{}
virtual void Done();
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new Teredo_Analyzer(conn); }
static bool Available()
{ return BifConst::Tunnel::enable_teredo &&
BifConst::Tunnel::max_depth > 0; }
/**
* Emits a weird only if the analyzer has previously been able to
* decapsulate a Teredo packet since otherwise the weirds could happen
* frequently enough to be less than helpful.
*/
void Weird(const char* name) const
{
if ( ProtocolConfirmed() )
reporter->Weird(Conn(), name);
}
protected:
friend class AnalyzerTimer;
void ExpireTimer(double t);
};
class TeredoEncapsulation {
public:
TeredoEncapsulation(const Teredo_Analyzer* ta)
: inner_ip(0), origin_indication(0), auth(0), analyzer(ta)
{}
/**
* Returns whether input data parsed as a valid Teredo encapsulation type.
* If it was valid, the len argument is decremented appropriately.
*/
bool Parse(const u_char* data, int& len)
{ return DoParse(data, len, false, false); }
const u_char* InnerIP() const
{ return inner_ip; }
const u_char* OriginIndication() const
{ return origin_indication; }
const u_char* Authentication() const
{ return auth; }
RecordVal* BuildVal(const IP_Hdr* inner) const;
protected:
bool DoParse(const u_char* data, int& len, bool found_orig, bool found_au);
void Weird(const char* name) const
{ analyzer->Weird(name); }
const u_char* inner_ip;
const u_char* origin_indication;
const u_char* auth;
const Teredo_Analyzer* analyzer;
};
#endif


@@ -20,6 +20,7 @@ const char* TimerNames[] = {
	"IncrementalSendTimer",
	"IncrementalWriteTimer",
	"InterconnTimer",
	"IPTunnelInactivityTimer",
	"NetbiosExpireTimer",
	"NetworkTimer",
	"NTPExpireTimer",


@@ -26,6 +26,7 @@ enum TimerType {
	TIMER_INCREMENTAL_SEND,
	TIMER_INCREMENTAL_WRITE,
	TIMER_INTERCONN,
	TIMER_IP_TUNNEL_INACTIVITY,
	TIMER_NB_EXPIRE,
	TIMER_NETWORK,
	TIMER_NTP_EXPIRE,


@@ -0,0 +1,55 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include "TunnelEncapsulation.h"
#include "util.h"
#include "Conn.h"
EncapsulatingConn::EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t)
: src_addr(c->OrigAddr()), dst_addr(c->RespAddr()),
src_port(c->OrigPort()), dst_port(c->RespPort()),
proto(c->ConnTransport()), type(t), uid(c->GetUID())
{
if ( ! uid )
{
uid = calculate_unique_id();
c->SetUID(uid);
}
}
RecordVal* EncapsulatingConn::GetRecordVal() const
{
RecordVal *rv = new RecordVal(BifType::Record::Tunnel::EncapsulatingConn);
RecordVal* id_val = new RecordVal(conn_id);
id_val->Assign(0, new AddrVal(src_addr));
id_val->Assign(1, new PortVal(ntohs(src_port), proto));
id_val->Assign(2, new AddrVal(dst_addr));
id_val->Assign(3, new PortVal(ntohs(dst_port), proto));
rv->Assign(0, id_val);
rv->Assign(1, new EnumVal(type, BifType::Enum::Tunnel::Type));
char tmp[20];
rv->Assign(2, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62)));
return rv;
}
bool operator==(const EncapsulationStack& e1, const EncapsulationStack& e2)
{
if ( ! e1.conns )
return e2.conns;
if ( ! e2.conns )
return false;
if ( e1.conns->size() != e2.conns->size() )
return false;
for ( size_t i = 0; i < e1.conns->size(); ++i )
{
if ( (*e1.conns)[i] != (*e2.conns)[i] )
return false;
}
return true;
}

src/TunnelEncapsulation.h (new file)

@@ -0,0 +1,208 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef TUNNELS_H
#define TUNNELS_H
#include "config.h"
#include "NetVar.h"
#include "IPAddr.h"
#include "Val.h"
#include <vector>
class Connection;
/**
* Represents various types of tunnel "connections", that is, a pair of
* endpoints whose communication encapsulates inner IP packets. This could
* mean IP packets nested inside IP packets or IP packets nested inside a
* transport layer protocol. EncapsulatingConn's are assigned a UID, which can
* be shared with Connection's in the case the tunnel uses a transport-layer.
*/
class EncapsulatingConn {
public:
/**
* Default tunnel connection constructor.
*/
EncapsulatingConn()
: src_port(0), dst_port(0), proto(TRANSPORT_UNKNOWN),
type(BifEnum::Tunnel::NONE), uid(0)
{}
/**
* Construct an IP tunnel "connection" with its own UID.
* The assignment of "source" and "destination" addresses here can be
* arbitrary, comparison between EncapsulatingConn objects will treat IP
* tunnels as equivalent as long as the same two endpoints are involved.
*
* @param s The tunnel source address, likely taken from an IP header.
* @param d The tunnel destination address, likely taken from an IP header.
*/
EncapsulatingConn(const IPAddr& s, const IPAddr& d)
: src_addr(s), dst_addr(d), src_port(0), dst_port(0),
proto(TRANSPORT_UNKNOWN), type(BifEnum::Tunnel::IP)
{
uid = calculate_unique_id();
}
/**
* Construct a tunnel connection using information from an already existing
* transport-layer-aware connection object.
*
* @param c The connection from which endpoint information can be extracted.
* If it already has a UID associated with it, that gets inherited,
* otherwise a new UID is created for this tunnel and \a c.
* @param t The type of tunneling that is occurring over the connection.
*/
EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t);
/**
* Copy constructor.
*/
EncapsulatingConn(const EncapsulatingConn& other)
: src_addr(other.src_addr), dst_addr(other.dst_addr),
src_port(other.src_port), dst_port(other.dst_port),
proto(other.proto), type(other.type), uid(other.uid)
{}
/**
* Destructor.
*/
~EncapsulatingConn()
{}
BifEnum::Tunnel::Type Type() const
{ return type; }
/**
* Returns record value of type "EncapsulatingConn" representing the tunnel.
*/
RecordVal* GetRecordVal() const;
friend bool operator==(const EncapsulatingConn& ec1,
const EncapsulatingConn& ec2)
{
if ( ec1.type != ec2.type )
return false;
if ( ec1.type == BifEnum::Tunnel::IP )
// Reversing endpoints is still same tunnel.
return ec1.uid == ec2.uid && ec1.proto == ec2.proto &&
((ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr) ||
(ec1.src_addr == ec2.dst_addr && ec1.dst_addr == ec2.src_addr));
return ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr &&
ec1.src_port == ec2.src_port && ec1.dst_port == ec2.dst_port &&
ec1.uid == ec2.uid && ec1.proto == ec2.proto;
}
friend bool operator!=(const EncapsulatingConn& ec1,
const EncapsulatingConn& ec2)
{
return ! ( ec1 == ec2 );
}
protected:
IPAddr src_addr;
IPAddr dst_addr;
uint16 src_port;
uint16 dst_port;
TransportProto proto;
BifEnum::Tunnel::Type type;
uint64 uid;
};
/**
* Abstracts an arbitrary amount of nested tunneling.
*/
class EncapsulationStack {
public:
EncapsulationStack() : conns(0)
{}
EncapsulationStack(const EncapsulationStack& other)
{
if ( other.conns )
conns = new vector<EncapsulatingConn>(*(other.conns));
else
conns = 0;
}
EncapsulationStack& operator=(const EncapsulationStack& other)
{
if ( this == &other )
return *this;
delete conns;
if ( other.conns )
conns = new vector<EncapsulatingConn>(*(other.conns));
else
conns = 0;
return *this;
}
~EncapsulationStack() { delete conns; }
/**
* Add a new inner-most tunnel to the EncapsulationStack.
*
* @param c The new inner-most tunnel to append to the tunnel chain.
*/
void Add(const EncapsulatingConn& c)
{
if ( ! conns )
conns = new vector<EncapsulatingConn>();
conns->push_back(c);
}
/**
* Return how many nested tunnels are involved in a encapsulation, zero
* meaning no tunnels are present.
*/
size_t Depth() const
{
return conns ? conns->size() : 0;
}
/**
* Return the tunnel type of the inner-most tunnel.
*/
BifEnum::Tunnel::Type LastType() const
{
return conns ? (*conns)[conns->size()-1].Type() : BifEnum::Tunnel::NONE;
}
/**
* Get the value of type "EncapsulatingConnVector" represented by the
* entire encapsulation chain.
*/
VectorVal* GetVectorVal() const
{
VectorVal* vv = new VectorVal(
internal_type("EncapsulatingConnVector")->AsVectorType());
if ( conns )
{
for ( size_t i = 0; i < conns->size(); ++i )
vv->Assign(i, (*conns)[i].GetRecordVal(), 0);
}
return vv;
}
friend bool operator==(const EncapsulationStack& e1,
const EncapsulationStack& e2);
friend bool operator!=(const EncapsulationStack& e1,
const EncapsulationStack& e2)
{
return ! ( e1 == e2 );
}
protected:
vector<EncapsulatingConn>* conns;
};
#endif


@@ -1651,6 +1651,7 @@ int TableVal::RemoveFrom(Val* val) const
	while ( (v = tbl->NextEntry(k, c)) )
		{
		Val* index = RecoverIndex(k);
		Unref(index);
		Unref(t->Delete(k));
		delete k;

src/ayiya-analyzer.pac (new file)

@@ -0,0 +1,89 @@
connection AYIYA_Conn(bro_analyzer: BroAnalyzer)
{
upflow = AYIYA_Flow;
downflow = AYIYA_Flow;
};
flow AYIYA_Flow
{
datagram = PDU withcontext(connection, this);
function process_ayiya(pdu: PDU): bool
%{
Connection *c = connection()->bro_analyzer()->Conn();
const EncapsulationStack* e = c->GetEncapsulation();
if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
{
reporter->Weird(c, "tunnel_depth");
return false;
}
if ( ${pdu.op} != 1 )
{
// 1 is the "forward" command.
return false;
}
if ( ${pdu.next_header} != IPPROTO_IPV6 &&
${pdu.next_header} != IPPROTO_IPV4 )
{
reporter->Weird(c, "ayiya_tunnel_non_ip");
return false;
}
if ( ${pdu.packet}.length() < (int)sizeof(struct ip) )
{
connection()->bro_analyzer()->ProtocolViolation(
"Truncated AYIYA", (const char*) ${pdu.packet}.data(),
${pdu.packet}.length());
return false;
}
const struct ip* ip = (const struct ip*) ${pdu.packet}.data();
if ( ( ${pdu.next_header} == IPPROTO_IPV6 && ip->ip_v != 6 ) ||
( ${pdu.next_header} == IPPROTO_IPV4 && ip->ip_v != 4) )
{
connection()->bro_analyzer()->ProtocolViolation(
"AYIYA next header mismatch", (const char*)${pdu.packet}.data(),
${pdu.packet}.length());
return false;
}
IP_Hdr* inner = 0;
int result = sessions->ParseIPPacket(${pdu.packet}.length(),
${pdu.packet}.data(), ${pdu.next_header}, inner);
if ( result == 0 )
connection()->bro_analyzer()->ProtocolConfirmation();
else if ( result < 0 )
connection()->bro_analyzer()->ProtocolViolation(
"Truncated AYIYA", (const char*) ${pdu.packet}.data(),
${pdu.packet}.length());
else
connection()->bro_analyzer()->ProtocolViolation(
"AYIYA payload length", (const char*) ${pdu.packet}.data(),
${pdu.packet}.length());
if ( result != 0 )
{
delete inner;
return false;
}
EncapsulatingConn ec(c, BifEnum::Tunnel::AYIYA);
sessions->DoNextInnerPacket(network_time(), 0, inner, e, ec);
return (result == 0) ? true : false;
%}
};
refine typeattr PDU += &let {
proc_ayiya = $context.flow.process_ayiya(this);
};

src/ayiya-protocol.pac (new file)

@@ -0,0 +1,16 @@
type PDU = record {
identity_byte: uint8;
signature_byte: uint8;
auth_and_op: uint8;
next_header: uint8;
epoch: uint32;
identity: bytestring &length=identity_len;
signature: bytestring &length=signature_len;
packet: bytestring &restofdata;
} &let {
identity_len = (1 << (identity_byte >> 4));
signature_len = (signature_byte >> 4) * 4;
auth = auth_and_op >> 4;
op = auth_and_op & 0xF;
} &byteorder = littleendian;

src/ayiya.pac (new file)

@@ -0,0 +1,10 @@
%include binpac.pac
%include bro.pac
analyzer AYIYA withcontext {
connection: AYIYA_Conn;
flow: AYIYA_Flow;
};
%include ayiya-protocol.pac
%include ayiya-analyzer.pac


@@ -972,12 +972,12 @@ function sha256_hash_finish%(index: any%): string
##
## .. note::
##
-##      This function is a wrapper about the function ``rand`` provided by
-##      the OS.
+##      This function is a wrapper about the function ``random``
+##      provided by the OS.
function rand%(max: count%): count
	%{
	int result;
-	result = bro_uint_t(double(max) * double(rand()) / (RAND_MAX + 1.0));
+	result = bro_uint_t(double(max) * double(bro_random()) / (RAND_MAX + 1.0));
	return new Val(result, TYPE_COUNT);
	%}
@@ -989,11 +989,11 @@ function rand%(max: count%): count
##
## .. note::
##
-##      This function is a wrapper about the function ``srand`` provided
-##      by the OS.
+##      This function is a wrapper about the function ``srandom``
+##      provided by the OS.
function srand%(seed: count%): any
	%{
-	srand(seed);
+	bro_srandom(seed);
	return 0;
	%}
@@ -1700,7 +1700,7 @@ function fmt%(...%): string
		return new StringVal("");
		}
	else if ( n >= @ARGC@ )
		{
		builtin_error("too few arguments for format", fmt_v);
		return new StringVal("");
@@ -4814,7 +4814,9 @@ function calc_next_rotate%(i: interval%) : interval
	%{
	const char* base_time = log_rotate_base_time ?
		log_rotate_base_time->AsString()->CheckString() : 0;
-	return new Val(calc_next_rotate(i, base_time), TYPE_INTERVAL);
+
+	double base = parse_rotate_base_time(base_time);
+	return new Val(calc_next_rotate(network_time, i, base), TYPE_INTERVAL);
	%}
## Returns the size of a given file.


@@ -4,7 +4,6 @@
const ignore_keep_alive_rexmit: bool;
const skip_http_data: bool;
-const parse_udp_tunnels: bool;
const use_conn_size_analyzer: bool;
const report_gaps_for_partial: bool;

@@ -12,4 +11,11 @@ const NFS3::return_data: bool;
const NFS3::return_data_max: count;
const NFS3::return_data_first_only: bool;
const Tunnel::max_depth: count;
const Tunnel::enable_ip: bool;
const Tunnel::enable_ayiya: bool;
const Tunnel::enable_teredo: bool;
const Tunnel::yielding_teredo_decapsulation: bool;
const Tunnel::ip_tunnel_timeout: interval;
const Threading::heartbeat_interval: interval;

(File diff suppressed because it is too large.)

@@ -71,15 +71,14 @@ declare(PDict, InputHash);
class Manager::Stream {
public:
	string name;
-	string source;
+	ReaderBackend::ReaderInfo info;
	bool removed;
-	ReaderMode mode;
	StreamType stream_type; // to distinguish between event and table streams
	EnumVal* type;
	ReaderFrontend* reader;
+	TableVal* config;
	RecordVal* description;
@@ -103,6 +102,9 @@ Manager::Stream::~Stream()
	if ( description )
		Unref(description);
+	if ( config )
+		Unref(config);
	if ( reader )
		delete(reader);
	}
@@ -255,10 +257,10 @@ ReaderBackend* Manager::CreateBackend(ReaderFrontend* frontend, bro_int_t type)
	assert(ir->factory);

+	frontend->SetTypeName(ir->name);

	ReaderBackend* backend = (*ir->factory)(frontend);
	assert(backend);

-	frontend->ty_name = ir->name;

	return backend;
	}
@@ -300,19 +302,20 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
	Unref(sourceval);

	EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
+	Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));

	switch ( mode->InternalInt() )
		{
		case 0:
-			info->mode = MODE_MANUAL;
+			info->info.mode = MODE_MANUAL;
			break;

		case 1:
-			info->mode = MODE_REREAD;
+			info->info.mode = MODE_REREAD;
			break;

		case 2:
-			info->mode = MODE_STREAM;
+			info->info.mode = MODE_STREAM;
			break;

		default:

@@ -324,10 +327,30 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
	info->reader = reader_obj;
	info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
	info->name = name;
-	info->source = source;
+	info->config = config->AsTableVal(); // ref'd by LookupWithDefault
+	info->info.source = source;
	Ref(description);
	info->description = description;
{
HashKey* k;
IterCookie* c = info->config->AsTable()->InitForIteration();
TableEntryVal* v;
while ( (v = info->config->AsTable()->NextEntry(k, c)) )
{
ListVal* index = info->config->RecoverIndex(k);
string key = index->Index(0)->AsString()->CheckString();
string value = v->Value()->AsString()->CheckString();
info->info.config.insert(std::make_pair(key, value));
Unref(index);
delete k;
}
}
	DBG_LOG(DBG_INPUT, "Successfully created new input stream %s",
		name.c_str());
@@ -451,7 +474,8 @@ bool Manager::CreateEventStream(RecordVal* fval)
	Unref(want_record); // ref'd by lookupwithdefault

	assert(stream->reader);
-	stream->reader->Init(stream->source, stream->mode, stream->num_fields, logf );
+	stream->reader->Init(stream->info, stream->num_fields, logf );

	readers[stream->reader] = stream;
@@ -628,7 +652,7 @@ bool Manager::CreateTableStream(RecordVal* fval)
	assert(stream->reader);
-	stream->reader->Init(stream->source, stream->mode, fieldsV.size(), fields );
+	stream->reader->Init(stream->info, fieldsV.size(), fields );

	readers[stream->reader] = stream;
@@ -689,31 +713,39 @@ bool Manager::IsCompatibleType(BroType* t, bool atomic_only)
	}

-bool Manager::RemoveStream(const string &name)
+bool Manager::RemoveStream(Stream *i)
	{
-	Stream *i = FindStream(name);
	if ( i == 0 )
		return false; // not found

	if ( i->removed )
		{
-		reporter->Error("Stream %s is already queued for removal. Ignoring remove.", name.c_str());
-		return false;
+		reporter->Warning("Stream %s is already queued for removal. Ignoring remove.", i->name.c_str());
+		return true;
		}

	i->removed = true;
	i->reader->Close();

-#ifdef DEBUG
-	DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s",
-		name.c_str());
-#endif
+	DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s",
+		i->name.c_str());

	return true;
	}
bool Manager::RemoveStream(ReaderFrontend* frontend)
{
return RemoveStream(FindStream(frontend));
}
bool Manager::RemoveStream(const string &name)
{
return RemoveStream(FindStream(name));
}
bool Manager::RemoveStreamContinuation(ReaderFrontend* reader)
	{
	Stream *i = FindStream(reader);

@@ -1200,7 +1232,7 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
#endif

	// Send event that the current update is indeed finished.
-	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->source.c_str()));
+	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info.source.c_str()));
	}

void Manager::Put(ReaderFrontend* reader, Value* *vals)


@@ -72,7 +72,7 @@ public:
	/**
	 * Deletes an existing input stream.
	 *
-	 * @param id The enum value corresponding the input stream.
+	 * @param id The name of the input stream to be removed.
	 *
	 * This method corresponds directly to the internal BiF defined in
	 * input.bif, which just forwards here.
@@ -88,6 +88,7 @@ protected:
	friend class SendEntryMessage;
	friend class EndCurrentSendMessage;
	friend class ReaderClosedMessage;
+	friend class DisableMessage;

	// For readers to write to input stream in direct mode (reporting
	// new/deleted values directly). Functions take ownership of
@@ -118,12 +119,26 @@ protected:
	// main thread. This makes sure all data that has ben queued for a
	// stream is still received.
	bool RemoveStreamContinuation(ReaderFrontend* reader);
/**
* Deletes an existing input stream.
*
* @param frontend pointer to the frontend of the input stream to be removed.
*
* This method is used by the reader backends to remove a reader when it fails
* for some reason.
*/
bool RemoveStream(ReaderFrontend* frontend);
private:
	class Stream;
	class TableStream;
	class EventStream;
// Actual RemoveStream implementation -- the function's public and
// protected definitions are wrappers around this function.
bool RemoveStream(Stream* i);
	bool CreateStream(Stream*, RecordVal* description);

	// SendEntry implementation for Table stream.


@@ -113,6 +113,7 @@ public:
	virtual bool Process()
		{
+		Object()->SetDisable();
		return input_mgr->RemoveStreamContinuation(Object());
		}
@@ -129,10 +130,17 @@ public:
	virtual bool Process()
		{
		Object()->SetDisable();
// And - because we do not need disabled objects any more -
// there is no way to re-enable them, so simply delete them.
// This avoids the problem of having to periodically check if
// there are any disabled readers out there. As soon as a
// reader disables itself, it deletes itself.
input_mgr->RemoveStream(Object());
		return true;
		}
};
using namespace logging;
ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread()
	{
@@ -176,18 +184,17 @@ void ReaderBackend::SendEntry(Value* *vals)
	SendOut(new SendEntryMessage(frontend, vals));
	}

-bool ReaderBackend::Init(string arg_source, ReaderMode arg_mode, const int arg_num_fields,
+bool ReaderBackend::Init(const ReaderInfo& arg_info, const int arg_num_fields,
			 const threading::Field* const* arg_fields)
	{
-	source = arg_source;
-	mode = arg_mode;
+	info = arg_info;
	num_fields = arg_num_fields;
	fields = arg_fields;
-	SetName("InputReader/"+source);
+	SetName("InputReader/"+info.source);

	// disable if DoInit returns error.
-	int success = DoInit(arg_source, mode, arg_num_fields, arg_fields);
+	int success = DoInit(arg_info, arg_num_fields, arg_fields);

	if ( ! success )
		{

@@ -203,8 +210,7 @@ bool ReaderBackend::Init(string arg_source, ReaderMode arg_mode, const int arg_n
void ReaderBackend::Close()
	{
	DoClose();
-	disabled = true;
-	DisableFrontend();
+	disabled = true; // frontend disables itself when it gets the Close-message.
	SendOut(new ReaderClosedMessage(frontend));

	if ( fields != 0 )


@@ -11,21 +11,28 @@
namespace input {

/**
 * The modes a reader can be in.
 */
enum ReaderMode {
	/**
-	 * TODO Bernhard.
+	 * Manual refresh reader mode. The reader will read the file once,
+	 * and send all read data back to the manager. After that, no automatic
+	 * refresh should happen. Manual refreshes can be triggered from the
+	 * scripting layer using force_update.
	 */
	MODE_MANUAL,

	/**
-	 * TODO Bernhard.
+	 * Automatic rereading mode. The reader should monitor the
+	 * data source for changes continually. When the data source changes,
+	 * either the whole file has to be resent using the SendEntry/EndCurrentSend functions.
	 */
	MODE_REREAD,

	/**
-	 * TODO Bernhard.
+	 * Streaming reading mode. The reader should monitor the data source
+	 * for new appended data. When new data is appended is has to be sent
+	 * using the Put api functions.
	 */
	MODE_STREAM
};
@@ -58,23 +65,48 @@ public:
	 */
	virtual ~ReaderBackend();
/**
* A struct passing information to the reader at initialization time.
*/
struct ReaderInfo
{
typedef std::map<string, string> config_map;
/**
* A string left to the interpretation of the reader
* implementation; it corresponds to the value configured on
* the script-level for the logging filter.
*/
string source;
/**
* A map of key/value pairs corresponding to the relevant
* filter's "config" table.
*/
config_map config;
/**
* The opening mode for the input source.
*/
ReaderMode mode;
};
	/**
	 * One-time initialization of the reader to define the input source.
	 *
-	 * @param source A string left to the interpretation of the
-	 * reader implementation; it corresponds to the value configured on
-	 * the script-level for the input stream.
-	 *
-	 * @param mode The opening mode for the input source.
+	 * @param @param info Meta information for the writer.
	 *
	 * @param num_fields Number of fields contained in \a fields.
	 *
	 * @param fields The types and names of the fields to be retrieved
	 * from the input source.
	 *
+	 * @param config A string map containing additional configuration options
+	 * for the reader.
+	 *
	 * @return False if an error occured.
	 */
-	bool Init(string source, ReaderMode mode, int num_fields, const threading::Field* const* fields);
+	bool Init(const ReaderInfo& info, int num_fields, const threading::Field* const* fields);
	/**
	 * Finishes reading from this input stream in a regular fashion. Must

@@ -102,6 +134,22 @@ public:
	 */
	void DisableFrontend();
/**
* Returns the log fields as passed into the constructor.
*/
const threading::Field* const * Fields() const { return fields; }
/**
* Returns the additional reader information passed into the constructor.
*/
const ReaderInfo& Info() const { return info; }
/**
* Returns the number of log fields as passed into the constructor.
*/
int NumFields() const { return num_fields; }
protected: protected:
// Methods that have to be overwritten by the individual readers // Methods that have to be overwritten by the individual readers
@ -123,7 +171,7 @@ protected:
* provides accessor methods to get them later, and they are passed * provides accessor methods to get them later, and they are passed
* in here only for convenience. * in here only for convenience.
*/ */
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields) = 0; virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields) = 0;
/** /**
* Reader-specific method implementing input finalization at * Reader-specific method implementing input finalization at
@ -152,26 +200,6 @@ protected:
*/ */
virtual bool DoUpdate() = 0; virtual bool DoUpdate() = 0;
/**
* Returns the input source as passed into Init().
*/
const string Source() const { return source; }
/**
* Returns the reader mode as passed into Init().
*/
const ReaderMode Mode() const { return mode; }
/**
* Returns the number of log fields as passed into Init().
*/
unsigned int NumFields() const { return num_fields; }
/**
* Returns the log fields as passed into Init().
*/
const threading::Field* const * Fields() const { return fields; }
/** /**
* Method allowing a reader to send a specified Bro event. Vals must * Method allowing a reader to send a specified Bro event. Vals must
* match the values expected by the bro event. * match the values expected by the bro event.
@ -272,8 +300,7 @@ private:
// from this class, it's running in a different thread! // from this class, it's running in a different thread!
ReaderFrontend* frontend; ReaderFrontend* frontend;
string source; ReaderInfo info;
ReaderMode mode;
unsigned int num_fields; unsigned int num_fields;
const threading::Field* const * fields; // raw mapping const threading::Field* const * fields; // raw mapping

View file

@ -6,27 +6,23 @@
#include "threading/MsgThread.h" #include "threading/MsgThread.h"
// FIXME: cleanup of disabled inputreaders is missing. we need this, because
// stuff can e.g. fail in init and might never be removed afterwards.
namespace input { namespace input {
class InitMessage : public threading::InputMessage<ReaderBackend> class InitMessage : public threading::InputMessage<ReaderBackend>
{ {
public: public:
InitMessage(ReaderBackend* backend, const string source, ReaderMode mode, InitMessage(ReaderBackend* backend, const ReaderBackend::ReaderInfo& info,
const int num_fields, const threading::Field* const* fields) const int num_fields, const threading::Field* const* fields)
: threading::InputMessage<ReaderBackend>("Init", backend), : threading::InputMessage<ReaderBackend>("Init", backend),
source(source), mode(mode), num_fields(num_fields), fields(fields) { } info(info), num_fields(num_fields), fields(fields) { }
virtual bool Process() virtual bool Process()
{ {
return Object()->Init(source, mode, num_fields, fields); return Object()->Init(info, num_fields, fields);
} }
private: private:
const string source; const ReaderBackend::ReaderInfo info;
const ReaderMode mode;
const int num_fields; const int num_fields;
const threading::Field* const* fields; const threading::Field* const* fields;
}; };
@ -66,8 +62,8 @@ ReaderFrontend::~ReaderFrontend()
{ {
} }
void ReaderFrontend::Init(string arg_source, ReaderMode mode, const int num_fields, void ReaderFrontend::Init(const ReaderBackend::ReaderInfo& arg_info, const int arg_num_fields,
const threading::Field* const* fields) const threading::Field* const* arg_fields)
{ {
if ( disabled ) if ( disabled )
return; return;
@ -75,10 +71,12 @@ void ReaderFrontend::Init(string arg_source, ReaderMode mode, const int num_fiel
if ( initialized ) if ( initialized )
reporter->InternalError("reader initialized twice"); reporter->InternalError("reader initialized twice");
source = arg_source; info = arg_info;
num_fields = arg_num_fields;
fields = arg_fields;
initialized = true; initialized = true;
backend->SendIn(new InitMessage(backend, arg_source, mode, num_fields, fields)); backend->SendIn(new InitMessage(backend, info, num_fields, fields));
} }
void ReaderFrontend::Update() void ReaderFrontend::Update()
@ -106,15 +104,16 @@ void ReaderFrontend::Close()
return; return;
} }
disabled = true;
backend->SendIn(new CloseMessage(backend)); backend->SendIn(new CloseMessage(backend));
} }
string ReaderFrontend::Name() const string ReaderFrontend::Name() const
{ {
if ( source.size() ) if ( ! info.source.size() )
return ty_name; return ty_name;
return ty_name + "/" + source; return ty_name + "/" + info.source;
} }
} }

View file

@ -52,7 +52,7 @@ public:
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
void Init(string arg_source, ReaderMode mode, const int arg_num_fields, const threading::Field* const* fields); void Init(const ReaderBackend::ReaderInfo& info, const int arg_num_fields, const threading::Field* const* fields);
/** /**
* Force an update of the current input source. Actual action depends * Force an update of the current input source. Actual action depends
@ -102,22 +102,39 @@ public:
*/ */
string Name() const; string Name() const;
protected: /**
friend class Manager; * Returns the additional reader information passed into the constructor.
*/
const ReaderBackend::ReaderInfo& Info() const { return info; }
/** /**
* Returns the source as passed into the constructor. * Returns the number of log fields as passed into the constructor.
*/ */
const string& Source() const { return source; }; int NumFields() const { return num_fields; }
/**
* Returns the log fields as passed into the constructor.
*/
const threading::Field* const * Fields() const { return fields; }
protected:
friend class Manager;
/** /**
* Returns the name of the backend's type. * Returns the name of the backend's type.
*/ */
const string& TypeName() const { return ty_name; } const string& TypeName() const { return ty_name; }
/**
* Sets the name of the backend's type.
*/
void SetTypeName(const string& name) { ty_name = name; }
private: private:
ReaderBackend* backend; // The backend we have instantiated. ReaderBackend* backend; // The backend we have instantiated.
string source; ReaderBackend::ReaderInfo info; // Meta information as passed to Init().
const threading::Field* const* fields; // The input fields.
int num_fields; // Information as passed to Init().
string ty_name; // Backend type, set by manager. string ty_name; // Backend type, set by manager.
bool disabled; // True if disabled. bool disabled; // True if disabled.
bool initialized; // True if initialized. bool initialized; // True if initialized.

View file

@ -83,14 +83,14 @@ void Ascii::DoClose()
} }
} }
bool Ascii::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields) bool Ascii::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
mtime = 0; mtime = 0;
file = new ifstream(path.c_str()); file = new ifstream(info.source.c_str());
if ( ! file->is_open() ) if ( ! file->is_open() )
{ {
Error(Fmt("Init: cannot open %s", path.c_str())); Error(Fmt("Init: cannot open %s", info.source.c_str()));
delete(file); delete(file);
file = 0; file = 0;
return false; return false;
@ -98,7 +98,7 @@ bool Ascii::DoInit(string path, ReaderMode mode, int num_fields, const Field* co
if ( ReadHeader(false) == false ) if ( ReadHeader(false) == false )
{ {
Error(Fmt("Init: cannot open %s; headers are incorrect", path.c_str())); Error(Fmt("Init: cannot open %s; headers are incorrect", info.source.c_str()));
file->close(); file->close();
delete(file); delete(file);
file = 0; file = 0;
@ -147,7 +147,7 @@ bool Ascii::ReadHeader(bool useCached)
//printf("Updating fields from description %s\n", line.c_str()); //printf("Updating fields from description %s\n", line.c_str());
columnMap.clear(); columnMap.clear();
for ( unsigned int i = 0; i < NumFields(); i++ ) for ( int i = 0; i < NumFields(); i++ )
{ {
const Field* field = Fields()[i]; const Field* field = Fields()[i];
@ -164,7 +164,7 @@ bool Ascii::ReadHeader(bool useCached)
} }
Error(Fmt("Did not find requested field %s in input data file %s.", Error(Fmt("Did not find requested field %s in input data file %s.",
field->name.c_str(), Source().c_str())); field->name.c_str(), Info().source.c_str()));
return false; return false;
} }
@ -362,14 +362,14 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
// read the entire file and send appropriate thingies back to InputMgr // read the entire file and send appropriate thingies back to InputMgr
bool Ascii::DoUpdate() bool Ascii::DoUpdate()
{ {
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_REREAD: case MODE_REREAD:
{ {
// check if the file has changed // check if the file has changed
struct stat sb; struct stat sb;
if ( stat(Source().c_str(), &sb) == -1 ) if ( stat(Info().source.c_str(), &sb) == -1 )
{ {
Error(Fmt("Could not get stat for %s", Source().c_str())); Error(Fmt("Could not get stat for %s", Info().source.c_str()));
return false; return false;
} }
@ -389,7 +389,7 @@ bool Ascii::DoUpdate()
// - this is not that bad) // - this is not that bad)
if ( file && file->is_open() ) if ( file && file->is_open() )
{ {
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
{ {
file->clear(); // remove end of file evil bits file->clear(); // remove end of file evil bits
if ( !ReadHeader(true) ) if ( !ReadHeader(true) )
@ -403,10 +403,10 @@ bool Ascii::DoUpdate()
file = 0; file = 0;
} }
file = new ifstream(Source().c_str()); file = new ifstream(Info().source.c_str());
if ( ! file->is_open() ) if ( ! file->is_open() )
{ {
Error(Fmt("cannot open %s", Source().c_str())); Error(Fmt("cannot open %s", Info().source.c_str()));
return false; return false;
} }
@ -490,15 +490,15 @@ bool Ascii::DoUpdate()
} }
//printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields); //printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields);
assert ( (unsigned int) fpos == NumFields() ); assert ( fpos == NumFields() );
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
Put(fields); Put(fields);
else else
SendEntry(fields); SendEntry(fields);
} }
if ( Mode () != MODE_STREAM ) if ( Info().mode != MODE_STREAM )
EndCurrentSend(); EndCurrentSend();
return true; return true;
@ -508,7 +508,7 @@ bool Ascii::DoHeartbeat(double network_time, double current_time)
{ {
ReaderBackend::DoHeartbeat(network_time, current_time); ReaderBackend::DoHeartbeat(network_time, current_time);
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_MANUAL: case MODE_MANUAL:
// yay, we do nothing :) // yay, we do nothing :)
break; break;

View file

@ -38,7 +38,7 @@ public:
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); } static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); }
protected: protected:
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields); virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
virtual void DoClose(); virtual void DoClose();
virtual bool DoUpdate(); virtual bool DoUpdate();
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time);

View file

@ -36,9 +36,9 @@ void Benchmark::DoClose()
{ {
} }
bool Benchmark::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields) bool Benchmark::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
num_lines = atoi(path.c_str()); num_lines = atoi(info.source.c_str());
if ( autospread != 0.0 ) if ( autospread != 0.0 )
autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) ); autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) );
@ -59,7 +59,7 @@ string Benchmark::RandomString(const int len)
"abcdefghijklmnopqrstuvwxyz"; "abcdefghijklmnopqrstuvwxyz";
for (int i = 0; i < len; ++i) for (int i = 0; i < len; ++i)
s[i] = values[rand() / (RAND_MAX / sizeof(values))]; s[i] = values[random() / (RAND_MAX / sizeof(values))];
return s; return s;
} }
@ -80,10 +80,10 @@ bool Benchmark::DoUpdate()
for ( int i = 0; i < linestosend; i++ ) for ( int i = 0; i < linestosend; i++ )
{ {
Value** field = new Value*[NumFields()]; Value** field = new Value*[NumFields()];
for (unsigned int j = 0; j < NumFields(); j++ ) for (int j = 0; j < NumFields(); j++ )
field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype); field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype);
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
// do not do tracking, spread out elements over the second that we have... // do not do tracking, spread out elements over the second that we have...
Put(field); Put(field);
else else
@ -109,7 +109,7 @@ bool Benchmark::DoUpdate()
} }
if ( Mode() != MODE_STREAM ) if ( Info().mode != MODE_STREAM )
EndCurrentSend(); EndCurrentSend();
return true; return true;
@ -134,7 +134,7 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
break; break;
case TYPE_INT: case TYPE_INT:
val->val.int_val = rand(); val->val.int_val = random();
break; break;
case TYPE_TIME: case TYPE_TIME:
@ -148,11 +148,11 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
case TYPE_COUNT: case TYPE_COUNT:
case TYPE_COUNTER: case TYPE_COUNTER:
val->val.uint_val = rand(); val->val.uint_val = random();
break; break;
case TYPE_PORT: case TYPE_PORT:
val->val.port_val.port = rand() / (RAND_MAX / 60000); val->val.port_val.port = random() / (RAND_MAX / 60000);
val->val.port_val.proto = TRANSPORT_UNKNOWN; val->val.port_val.proto = TRANSPORT_UNKNOWN;
break; break;
@ -175,7 +175,7 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
// Then - common stuff // Then - common stuff
{ {
// how many entries do we have... // how many entries do we have...
unsigned int length = rand() / (RAND_MAX / 15); unsigned int length = random() / (RAND_MAX / 15);
Value** lvals = new Value* [length]; Value** lvals = new Value* [length];
@ -227,7 +227,7 @@ bool Benchmark::DoHeartbeat(double network_time, double current_time)
num_lines += add; num_lines += add;
heartbeatstarttime = CurrTime(); heartbeatstarttime = CurrTime();
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_MANUAL: case MODE_MANUAL:
// yay, we do nothing :) // yay, we do nothing :)
break; break;

View file

@ -18,7 +18,7 @@ public:
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); } static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); }
protected: protected:
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields); virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
virtual void DoClose(); virtual void DoClose();
virtual bool DoUpdate(); virtual bool DoUpdate();
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time);

View file

@ -66,7 +66,7 @@ bool Raw::OpenInput()
// This is defined in input/fdstream.h // This is defined in input/fdstream.h
in = new boost::fdistream(fileno(file)); in = new boost::fdistream(fileno(file));
if ( execute && Mode() == MODE_STREAM ) if ( execute && Info().mode == MODE_STREAM )
fcntl(fileno(file), F_SETFL, O_NONBLOCK); fcntl(fileno(file), F_SETFL, O_NONBLOCK);
return true; return true;
@ -79,6 +79,9 @@ bool Raw::CloseInput()
InternalError(Fmt("Trying to close closed file for stream %s", fname.c_str())); InternalError(Fmt("Trying to close closed file for stream %s", fname.c_str()));
return false; return false;
} }
#ifdef DEBUG
Debug(DBG_INPUT, "Raw reader starting close");
#endif
delete in; delete in;
@ -90,18 +93,22 @@ bool Raw::CloseInput()
in = NULL; in = NULL;
file = NULL; file = NULL;
#ifdef DEBUG
Debug(DBG_INPUT, "Raw reader finished close");
#endif
return true; return true;
} }
bool Raw::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields) bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
fname = path; fname = info.source;
mtime = 0; mtime = 0;
execute = false; execute = false;
firstrun = true; firstrun = true;
bool result; bool result;
if ( path.length() == 0 ) if ( info.source.length() == 0 )
{ {
Error("No source path provided"); Error("No source path provided");
return false; return false;
@ -122,16 +129,16 @@ bool Raw::DoInit(string path, ReaderMode mode, int num_fields, const Field* cons
} }
// do Initialization // do Initialization
char last = path[path.length()-1]; char last = info.source[info.source.length()-1];
if ( last == '|' ) if ( last == '|' )
{ {
execute = true; execute = true;
fname = path.substr(0, fname.length() - 1); fname = info.source.substr(0, fname.length() - 1);
if ( (mode != MODE_MANUAL) && (mode != MODE_STREAM) ) if ( (info.mode != MODE_MANUAL) )
{ {
Error(Fmt("Unsupported read mode %d for source %s in execution mode", Error(Fmt("Unsupported read mode %d for source %s in execution mode",
mode, fname.c_str())); info.mode, fname.c_str()));
return false; return false;
} }
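The trailing-`|` convention handled above can be sketched in isolation: a source name ending in `|` marks a command to execute rather than a file to open. The function name is invented for illustration.

```cpp
#include <cassert>
#include <string>

// Split a Raw-reader style source name into a file/command name plus an
// execute flag, following the trailing-'|' convention.
bool parse_source(const std::string& source, std::string& fname, bool& execute)
	{
	if ( source.empty() )
		return false;  // no source path provided

	execute = (source[source.size() - 1] == '|');
	fname = execute ? source.substr(0, source.size() - 1) : source;
	return true;
	}
```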
@ -180,7 +187,7 @@ bool Raw::DoUpdate()
else else
{ {
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_REREAD: case MODE_REREAD:
{ {
// check if the file has changed // check if the file has changed
@ -203,7 +210,7 @@ bool Raw::DoUpdate()
case MODE_MANUAL: case MODE_MANUAL:
case MODE_STREAM: case MODE_STREAM:
if ( Mode() == MODE_STREAM && file != NULL && in != NULL ) if ( Info().mode == MODE_STREAM && file != NULL && in != NULL )
{ {
//fpurge(file); //fpurge(file);
in->clear(); // remove end of file evil bits in->clear(); // remove end of file evil bits
@ -247,15 +254,21 @@ bool Raw::DoHeartbeat(double network_time, double current_time)
{ {
ReaderBackend::DoHeartbeat(network_time, current_time); ReaderBackend::DoHeartbeat(network_time, current_time);
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_MANUAL: case MODE_MANUAL:
// yay, we do nothing :) // yay, we do nothing :)
break; break;
case MODE_REREAD: case MODE_REREAD:
case MODE_STREAM: case MODE_STREAM:
#ifdef DEBUG
Debug(DBG_INPUT, "Starting Heartbeat update");
#endif
Update(); // call update and not DoUpdate, because update Update(); // call update and not DoUpdate, because update
// checks disabled. // checks disabled.
#ifdef DEBUG
Debug(DBG_INPUT, "Finished with heartbeat update");
#endif
break; break;
default: default:
assert(false); assert(false);

View file

@ -22,7 +22,7 @@ public:
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); } static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); }
protected: protected:
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields); virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
virtual void DoClose(); virtual void DoClose();
virtual bool DoUpdate(); virtual bool DoUpdate();
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time);

View file

@ -94,3 +94,9 @@ const type_prefix: string;
const max_batch_size: count; const max_batch_size: count;
const max_batch_interval: interval; const max_batch_interval: interval;
const max_byte_size: count; const max_byte_size: count;
# Options for the None writer.
module LogNone;
const debug: bool;

View file

@ -60,6 +60,7 @@ struct Manager::Filter {
string path; string path;
Val* path_val; Val* path_val;
EnumVal* writer; EnumVal* writer;
TableVal* config;
bool local; bool local;
bool remote; bool remote;
double interval; double interval;
@ -83,6 +84,7 @@ struct Manager::WriterInfo {
double interval; double interval;
Func* postprocessor; Func* postprocessor;
WriterFrontend* writer; WriterFrontend* writer;
WriterBackend::WriterInfo info;
}; };
struct Manager::Stream { struct Manager::Stream {
@ -200,10 +202,10 @@ WriterBackend* Manager::CreateBackend(WriterFrontend* frontend, bro_int_t type)
assert(ld->factory); assert(ld->factory);
frontend->ty_name = ld->name;
WriterBackend* backend = (*ld->factory)(frontend); WriterBackend* backend = (*ld->factory)(frontend);
assert(backend); assert(backend);
frontend->ty_name = ld->name;
return backend; return backend;
} }
@ -527,6 +529,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
Val* log_remote = fval->LookupWithDefault(rtype->FieldOffset("log_remote")); Val* log_remote = fval->LookupWithDefault(rtype->FieldOffset("log_remote"));
Val* interv = fval->LookupWithDefault(rtype->FieldOffset("interv")); Val* interv = fval->LookupWithDefault(rtype->FieldOffset("interv"));
Val* postprocessor = fval->LookupWithDefault(rtype->FieldOffset("postprocessor")); Val* postprocessor = fval->LookupWithDefault(rtype->FieldOffset("postprocessor"));
Val* config = fval->LookupWithDefault(rtype->FieldOffset("config"));
Filter* filter = new Filter; Filter* filter = new Filter;
filter->name = name->AsString()->CheckString(); filter->name = name->AsString()->CheckString();
@ -538,6 +541,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
filter->remote = log_remote->AsBool(); filter->remote = log_remote->AsBool();
filter->interval = interv->AsInterval(); filter->interval = interv->AsInterval();
filter->postprocessor = postprocessor ? postprocessor->AsFunc() : 0; filter->postprocessor = postprocessor ? postprocessor->AsFunc() : 0;
filter->config = config->Ref()->AsTableVal();
Unref(name); Unref(name);
Unref(pred); Unref(pred);
@ -546,6 +550,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
Unref(log_remote); Unref(log_remote);
Unref(interv); Unref(interv);
Unref(postprocessor); Unref(postprocessor);
Unref(config);
// Build the list of fields that the filter wants included, including // Build the list of fields that the filter wants included, including
// potentially rolling out fields. // potentially rolling out fields.
@ -773,8 +778,27 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
for ( int j = 0; j < filter->num_fields; ++j ) for ( int j = 0; j < filter->num_fields; ++j )
arg_fields[j] = new threading::Field(*filter->fields[j]); arg_fields[j] = new threading::Field(*filter->fields[j]);
WriterBackend::WriterInfo info;
info.path = path;
HashKey* k;
IterCookie* c = filter->config->AsTable()->InitForIteration();
TableEntryVal* v;
while ( (v = filter->config->AsTable()->NextEntry(k, c)) )
{
ListVal* index = filter->config->RecoverIndex(k);
string key = index->Index(0)->AsString()->CheckString();
string value = v->Value()->AsString()->CheckString();
info.config.insert(std::make_pair(key, value));
Unref(index);
delete k;
}
// CreateWriter() will set the other fields in info.
writer = CreateWriter(stream->id, filter->writer, writer = CreateWriter(stream->id, filter->writer,
path, filter->num_fields, info, filter->num_fields,
arg_fields, filter->local, filter->remote); arg_fields, filter->local, filter->remote);
if ( ! writer ) if ( ! writer )
@ -782,7 +806,6 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
Unref(columns); Unref(columns);
return false; return false;
} }
} }
// Alright, can do the write now. // Alright, can do the write now.
@ -962,7 +985,7 @@ threading::Value** Manager::RecordToFilterVals(Stream* stream, Filter* filter,
return vals; return vals;
} }
WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path, WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
int num_fields, const threading::Field* const* fields, bool local, bool remote) int num_fields, const threading::Field* const* fields, bool local, bool remote)
{ {
Stream* stream = FindStream(id); Stream* stream = FindStream(id);
@ -972,25 +995,21 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
return false; return false;
Stream::WriterMap::iterator w = Stream::WriterMap::iterator w =
stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), path)); stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), info.path));
if ( w != stream->writers.end() ) if ( w != stream->writers.end() )
// If we already have a writer for this, that's fine; we just // If we already have a writer for this, that's fine; we just
// return it. // return it.
return w->second->writer; return w->second->writer;
WriterFrontend* writer_obj = new WriterFrontend(id, writer, local, remote);
assert(writer_obj);
writer_obj->Init(path, num_fields, fields);
WriterInfo* winfo = new WriterInfo; WriterInfo* winfo = new WriterInfo;
winfo->type = writer->Ref()->AsEnumVal(); winfo->type = writer->Ref()->AsEnumVal();
winfo->writer = writer_obj; winfo->writer = 0;
winfo->open_time = network_time; winfo->open_time = network_time;
winfo->rotation_timer = 0; winfo->rotation_timer = 0;
winfo->interval = 0; winfo->interval = 0;
winfo->postprocessor = 0; winfo->postprocessor = 0;
winfo->info = info;
// Search for a corresponding filter for the writer/path pair and use its // Search for a corresponding filter for the writer/path pair and use its
// rotation settings. If no matching filter is found, fall back on // rotation settings. If no matching filter is found, fall back on
@ -1002,7 +1021,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
{ {
Filter* f = *it; Filter* f = *it;
if ( f->writer->AsEnum() == writer->AsEnum() && if ( f->writer->AsEnum() == writer->AsEnum() &&
f->path == winfo->writer->Path() ) f->path == info.path )
{ {
found_filter_match = true; found_filter_match = true;
winfo->interval = f->interval; winfo->interval = f->interval;
@ -1018,13 +1037,24 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
winfo->interval = id->ID_Val()->AsInterval(); winfo->interval = id->ID_Val()->AsInterval();
} }
InstallRotationTimer(winfo);
stream->writers.insert( stream->writers.insert(
Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), path), Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), info.path),
winfo)); winfo));
return writer_obj; // Still need to set the WriterInfo's rotation parameters, which we
// computed above.
const char* base_time = log_rotate_base_time ?
log_rotate_base_time->AsString()->CheckString() : 0;
winfo->info.rotation_interval = winfo->interval;
winfo->info.rotation_base = parse_rotate_base_time(base_time);
winfo->writer = new WriterFrontend(id, writer, local, remote);
winfo->writer->Init(winfo->info, num_fields, fields);
InstallRotationTimer(winfo);
return winfo->writer;
} }
void Manager::DeleteVals(int num_fields, threading::Value** vals) void Manager::DeleteVals(int num_fields, threading::Value** vals)
@ -1102,7 +1132,7 @@ void Manager::SendAllWritersTo(RemoteSerializer::PeerID peer)
EnumVal writer_val(i->first.first, BifType::Enum::Log::Writer); EnumVal writer_val(i->first.first, BifType::Enum::Log::Writer);
remote_serializer->SendLogCreateWriter(peer, (*s)->id, remote_serializer->SendLogCreateWriter(peer, (*s)->id,
&writer_val, &writer_val,
i->first.second, i->second->info,
writer->NumFields(), writer->NumFields(),
writer->Fields()); writer->Fields());
} }
@ -1227,8 +1257,9 @@ void Manager::InstallRotationTimer(WriterInfo* winfo)
const char* base_time = log_rotate_base_time ? const char* base_time = log_rotate_base_time ?
log_rotate_base_time->AsString()->CheckString() : 0; log_rotate_base_time->AsString()->CheckString() : 0;
double base = parse_rotate_base_time(base_time);
double delta_t = double delta_t =
calc_next_rotate(rotation_interval, base_time); calc_next_rotate(network_time, rotation_interval, base);
winfo->rotation_timer = winfo->rotation_timer =
new RotationTimer(network_time + delta_t, winfo, true); new RotationTimer(network_time + delta_t, winfo, true);
@ -1237,14 +1268,14 @@ void Manager::InstallRotationTimer(WriterInfo* winfo)
timer_mgr->Add(winfo->rotation_timer); timer_mgr->Add(winfo->rotation_timer);
DBG_LOG(DBG_LOGGING, "Scheduled rotation timer for %s to %.6f", DBG_LOG(DBG_LOGGING, "Scheduled rotation timer for %s to %.6f",
winfo->writer->Path().c_str(), winfo->rotation_timer->Time()); winfo->writer->Name().c_str(), winfo->rotation_timer->Time());
} }
} }
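The thread-safe half of the `calc_next_rotate()` split boils down to arithmetic on three numbers, with no script-layer lookups. The sketch below illustrates the idea; the name and exact semantics are assumptions, not the actual implementation.

```cpp
#include <cassert>
#include <cmath>

// Delay until the next rotation boundary, where boundaries repeat every
// `interval` seconds, aligned to `base` seconds past the epoch reference.
double next_rotate_delta(double network_time, double interval, double base)
	{
	// Seconds elapsed since the most recent aligned boundary.
	double since = std::fmod(network_time - base, interval);

	if ( since < 0 )
		since += interval;  // times earlier than `base` align too

	return interval - since;
	}
```

Because it only touches its arguments, a writer thread can call it with the `log_rotate_base_time` value captured at `DoInit()` time.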
void Manager::Rotate(WriterInfo* winfo) void Manager::Rotate(WriterInfo* winfo)
{ {
DBG_LOG(DBG_LOGGING, "Rotating %s at %.6f", DBG_LOG(DBG_LOGGING, "Rotating %s at %.6f",
winfo->writer->Path().c_str(), network_time); winfo->writer->Name().c_str(), network_time);
// Build a temporary path for the writer to move the file to. // Build a temporary path for the writer to move the file to.
struct tm tm; struct tm tm;
@ -1255,7 +1286,7 @@ void Manager::Rotate(WriterInfo* winfo)
localtime_r(&teatime, &tm); localtime_r(&teatime, &tm);
strftime(buf, sizeof(buf), date_fmt, &tm); strftime(buf, sizeof(buf), date_fmt, &tm);
string tmp = string(fmt("%s-%s", winfo->writer->Path().c_str(), buf)); string tmp = string(fmt("%s-%s", winfo->writer->Info().path.c_str(), buf));
// Trigger the rotation. // Trigger the rotation.
winfo->writer->Rotate(tmp, winfo->open_time, network_time, terminating); winfo->writer->Rotate(tmp, winfo->open_time, network_time, terminating);
@ -1273,7 +1304,7 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
return true; return true;
DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s", DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s",
writer->Path().c_str(), network_time, new_name.c_str()); writer->Name().c_str(), network_time, new_name.c_str());
WriterInfo* winfo = FindWriter(writer); WriterInfo* winfo = FindWriter(writer);
if ( ! winfo ) if ( ! winfo )
@ -1283,7 +1314,7 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
RecordVal* info = new RecordVal(BifType::Record::Log::RotationInfo); RecordVal* info = new RecordVal(BifType::Record::Log::RotationInfo);
info->Assign(0, winfo->type->Ref()); info->Assign(0, winfo->type->Ref());
info->Assign(1, new StringVal(new_name.c_str())); info->Assign(1, new StringVal(new_name.c_str()));
info->Assign(2, new StringVal(winfo->writer->Path().c_str())); info->Assign(2, new StringVal(winfo->writer->Info().path.c_str()));
info->Assign(3, new Val(open, TYPE_TIME)); info->Assign(3, new Val(open, TYPE_TIME));
info->Assign(4, new Val(close, TYPE_TIME)); info->Assign(4, new Val(close, TYPE_TIME));
info->Assign(5, new Val(terminating, TYPE_BOOL)); info->Assign(5, new Val(terminating, TYPE_BOOL));

View file

@ -9,13 +9,14 @@
#include "../EventHandler.h" #include "../EventHandler.h"
#include "../RemoteSerializer.h" #include "../RemoteSerializer.h"
#include "WriterBackend.h"
class SerializationFormat; class SerializationFormat;
class RemoteSerializer; class RemoteSerializer;
class RotationTimer; class RotationTimer;
namespace logging { namespace logging {
class WriterBackend;
class WriterFrontend; class WriterFrontend;
class RotationFinishedMessage; class RotationFinishedMessage;
@ -162,7 +163,7 @@ protected:
//// Function also used by the RemoteSerializer. //// Function also used by the RemoteSerializer.
// Takes ownership of fields. // Takes ownership of fields.
WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, string path, WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
int num_fields, const threading::Field* const* fields, int num_fields, const threading::Field* const* fields,
bool local, bool remote); bool local, bool remote);


@ -4,6 +4,7 @@
#include "bro_inet_ntop.h" #include "bro_inet_ntop.h"
#include "threading/SerialTypes.h" #include "threading/SerialTypes.h"
#include "Manager.h"
#include "WriterBackend.h" #include "WriterBackend.h"
#include "WriterFrontend.h" #include "WriterFrontend.h"
@ -60,14 +61,61 @@ public:
using namespace logging; using namespace logging;
bool WriterBackend::WriterInfo::Read(SerializationFormat* fmt)
{
int size;
if ( ! (fmt->Read(&path, "path") &&
fmt->Read(&rotation_base, "rotation_base") &&
fmt->Read(&rotation_interval, "rotation_interval") &&
fmt->Read(&size, "config_size")) )
return false;
config.clear();
while ( size )
{
string value;
string key;
if ( ! (fmt->Read(&key, "config-key") && fmt->Read(&value, "config-value")) )
return false;
config.insert(std::make_pair(key, value));
--size;
}
return true;
}
bool WriterBackend::WriterInfo::Write(SerializationFormat* fmt) const
{
int size = config.size();
if ( ! (fmt->Write(path, "path") &&
fmt->Write(rotation_base, "rotation_base") &&
fmt->Write(rotation_interval, "rotation_interval") &&
fmt->Write(size, "config_size")) )
return false;
for ( config_map::const_iterator i = config.begin(); i != config.end(); ++i )
{
if ( ! (fmt->Write(i->first, "config-key") && fmt->Write(i->second, "config-value")) )
return false;
}
return true;
}
WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread() WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread()
{ {
path = "<path not yet set>";
num_fields = 0; num_fields = 0;
fields = 0; fields = 0;
buffering = true; buffering = true;
frontend = arg_frontend; frontend = arg_frontend;
info.path = "<path not yet set>";
SetName(frontend->Name()); SetName(frontend->Name());
} }
@ -108,17 +156,17 @@ void WriterBackend::DisableFrontend()
SendOut(new DisableMessage(frontend)); SendOut(new DisableMessage(frontend));
} }
bool WriterBackend::Init(string arg_path, int arg_num_fields, const Field* const* arg_fields) bool WriterBackend::Init(const WriterInfo& arg_info, int arg_num_fields, const Field* const* arg_fields, const string& frontend_name)
{ {
path = arg_path; info = arg_info;
num_fields = arg_num_fields; num_fields = arg_num_fields;
fields = arg_fields; fields = arg_fields;
string name = Fmt("%s/%s", path.c_str(), frontend->Name().c_str()); string name = Fmt("%s/%s", info.path.c_str(), frontend_name.c_str());
SetName(name); SetName(name);
if ( ! DoInit(arg_path, arg_num_fields, arg_fields) ) if ( ! DoInit(arg_info, arg_num_fields, arg_fields) )
{ {
DisableFrontend(); DisableFrontend();
return false; return false;


@ -5,12 +5,14 @@
#ifndef LOGGING_WRITERBACKEND_H #ifndef LOGGING_WRITERBACKEND_H
#define LOGGING_WRITERBACKEND_H #define LOGGING_WRITERBACKEND_H
#include "Manager.h"
#include "threading/MsgThread.h" #include "threading/MsgThread.h"
class RemoteSerializer;
namespace logging { namespace logging {
class WriterFrontend;
/** /**
* Base class for writer implementation. When the logging::Manager creates a * Base class for writer implementation. When the logging::Manager creates a
* new logging filter, it instantiates a WriterFrontend. That then in turn * new logging filter, it instantiates a WriterFrontend. That then in turn
@ -41,21 +43,59 @@ public:
*/ */
virtual ~WriterBackend(); virtual ~WriterBackend();
/**
* A struct passing information to the writer at initialization time.
*/
struct WriterInfo
{
typedef std::map<string, string> config_map;
/**
* A string left to the interpretation of the writer
* implementation; it corresponds to the 'path' value configured
* on the script-level for the logging filter.
*/
string path;
/**
* The rotation interval as configured for this writer.
*/
double rotation_interval;
/**
* The parsed value of log_rotate_base_time in seconds.
*/
double rotation_base;
/**
* A map of key/value pairs corresponding to the relevant
* filter's "config" table.
*/
std::map<string, string> config;
private:
friend class ::RemoteSerializer;
// Note, these need to be adapted when changing the struct's
// fields. They serialize/deserialize the struct.
bool Read(SerializationFormat* fmt);
bool Write(SerializationFormat* fmt) const;
};
/** /**
* One-time initialization of the writer to define the logged fields. * One-time initialization of the writer to define the logged fields.
* *
* @param path A string left to the interpretation of the writer * @param info Meta information for the writer.
* implementation; it corresponds to the value configured on the * @param num_fields
* script-level for the logging filter.
*
* @param num_fields The number of log fields for the stream.
* *
* @param fields An array of size \a num_fields with the log fields. * @param fields An array of size \a num_fields with the log fields.
* The method takes ownership of the array. * The method takes ownership of the array.
* *
* @param frontend_name The name of the front-end writer implementation.
*
* @return False if an error occurred. * @return False if an error occurred.
*/ */
bool Init(string path, int num_fields, const threading::Field* const* fields); bool Init(const WriterInfo& info, int num_fields, const threading::Field* const* fields, const string& frontend_name);
/** /**
* Writes one log entry. * Writes one log entry.
@ -108,9 +148,9 @@ public:
void DisableFrontend(); void DisableFrontend();
/** /**
* Returns the log path as passed into the constructor. * Returns the additional writer information passed into the constructor.
*/ */
const string Path() const { return path; } const WriterInfo& Info() const { return info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
@ -185,7 +225,7 @@ protected:
* disabled and eventually deleted. When returning false, an * disabled and eventually deleted. When returning false, an
* implementation should also call Error() to indicate what happened. * implementation should also call Error() to indicate what happened.
*/ */
virtual bool DoInit(string path, int num_fields, virtual bool DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const* fields) = 0; const threading::Field* const* fields) = 0;
/** /**
@ -299,7 +339,7 @@ private:
// this class, it's running in a different thread! // this class, it's running in a different thread!
WriterFrontend* frontend; WriterFrontend* frontend;
string path; // Log path. WriterInfo info; // Meta information as passed to Init().
int num_fields; // Number of log fields. int num_fields; // Number of log fields.
const threading::Field* const* fields; // Log fields. const threading::Field* const* fields; // Log fields.
bool buffering; // True if buffering is enabled. bool buffering; // True if buffering is enabled.


@ -2,6 +2,7 @@
#include "Net.h" #include "Net.h"
#include "threading/SerialTypes.h" #include "threading/SerialTypes.h"
#include "Manager.h"
#include "WriterFrontend.h" #include "WriterFrontend.h"
#include "WriterBackend.h" #include "WriterBackend.h"
@ -15,16 +16,18 @@ namespace logging {
class InitMessage : public threading::InputMessage<WriterBackend> class InitMessage : public threading::InputMessage<WriterBackend>
{ {
public: public:
InitMessage(WriterBackend* backend, const string path, const int num_fields, const Field* const* fields) InitMessage(WriterBackend* backend, const WriterBackend::WriterInfo& info, const int num_fields, const Field* const* fields, const string& frontend_name)
: threading::InputMessage<WriterBackend>("Init", backend), : threading::InputMessage<WriterBackend>("Init", backend),
path(path), num_fields(num_fields), fields(fields) { } info(info), num_fields(num_fields), fields(fields),
frontend_name(frontend_name) { }
virtual bool Process() { return Object()->Init(path, num_fields, fields); } virtual bool Process() { return Object()->Init(info, num_fields, fields, frontend_name); }
private: private:
const string path; WriterBackend::WriterInfo info;
const int num_fields; const int num_fields;
const Field * const* fields; const Field * const* fields;
const string frontend_name;
}; };
class RotateMessage : public threading::InputMessage<WriterBackend> class RotateMessage : public threading::InputMessage<WriterBackend>
@ -134,10 +137,10 @@ WriterFrontend::~WriterFrontend()
string WriterFrontend::Name() const string WriterFrontend::Name() const
{ {
if ( path.size() ) if ( ! info.path.size() )
return ty_name; return ty_name;
return ty_name + "/" + path; return ty_name + "/" + info.path;
} }
void WriterFrontend::Stop() void WriterFrontend::Stop()
@ -149,7 +152,7 @@ void WriterFrontend::Stop()
backend->Stop(); backend->Stop();
} }
void WriterFrontend::Init(string arg_path, int arg_num_fields, const Field* const * arg_fields) void WriterFrontend::Init(const WriterBackend::WriterInfo& arg_info, int arg_num_fields, const Field* const * arg_fields)
{ {
if ( disabled ) if ( disabled )
return; return;
@ -157,19 +160,19 @@ void WriterFrontend::Init(string arg_path, int arg_num_fields, const Field* cons
if ( initialized ) if ( initialized )
reporter->InternalError("writer initialized twice"); reporter->InternalError("writer initialized twice");
path = arg_path; info = arg_info;
num_fields = arg_num_fields; num_fields = arg_num_fields;
fields = arg_fields; fields = arg_fields;
initialized = true; initialized = true;
if ( backend ) if ( backend )
backend->SendIn(new InitMessage(backend, arg_path, arg_num_fields, arg_fields)); backend->SendIn(new InitMessage(backend, arg_info, arg_num_fields, arg_fields, Name()));
if ( remote ) if ( remote )
remote_serializer->SendLogCreateWriter(stream, remote_serializer->SendLogCreateWriter(stream,
writer, writer,
arg_path, arg_info,
arg_num_fields, arg_num_fields,
arg_fields); arg_fields);
@ -183,7 +186,7 @@ void WriterFrontend::Write(int num_fields, Value** vals)
if ( remote ) if ( remote )
remote_serializer->SendLogWrite(stream, remote_serializer->SendLogWrite(stream,
writer, writer,
path, info.path,
num_fields, num_fields,
vals); vals);


@ -3,13 +3,13 @@
#ifndef LOGGING_WRITERFRONTEND_H #ifndef LOGGING_WRITERFRONTEND_H
#define LOGGING_WRITERFRONTEND_H #define LOGGING_WRITERFRONTEND_H
#include "Manager.h" #include "WriterBackend.h"
#include "threading/MsgThread.h" #include "threading/MsgThread.h"
namespace logging { namespace logging {
class WriterBackend; class Manager;
/** /**
* Bridge class between the logging::Manager and backend writer threads. The * Bridge class between the logging::Manager and backend writer threads. The
@ -68,7 +68,7 @@ public:
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
void Init(string path, int num_fields, const threading::Field* const* fields); void Init(const WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const* fields);
/** /**
* Write out a record. * Write out a record.
@ -169,9 +169,9 @@ public:
bool Disabled() { return disabled; } bool Disabled() { return disabled; }
/** /**
* Returns the log path as passed into the constructor. * Returns the additional writer information as passed into the constructor.
*/ */
const string Path() const { return path; } const WriterBackend::WriterInfo& Info() const { return info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
@ -207,7 +207,7 @@ protected:
bool remote; // True if logging remotely. bool remote; // True if logging remotely.
string ty_name; // Name of the backend type. Set by the manager. string ty_name; // Name of the backend type. Set by the manager.
string path; // The log path. WriterBackend::WriterInfo info; // The writer information.
int num_fields; // The number of log fields. int num_fields; // The number of log fields.
const threading::Field* const* fields; // The log fields. const threading::Field* const* fields; // The log fields.


@ -69,8 +69,10 @@ bool Ascii::WriteHeaderField(const string& key, const string& val)
return (fwrite(str.c_str(), str.length(), 1, file) == 1); return (fwrite(str.c_str(), str.length(), 1, file) == 1);
} }
bool Ascii::DoInit(string path, int num_fields, const Field* const * fields) bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields)
{ {
string path = info.path;
if ( output_to_stdout ) if ( output_to_stdout )
path = "/dev/stdout"; path = "/dev/stdout";
@ -290,7 +292,7 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
Value** vals) Value** vals)
{ {
if ( ! file ) if ( ! file )
DoInit(Path(), NumFields(), Fields()); DoInit(Info(), NumFields(), Fields());
desc.Clear(); desc.Clear();
@ -320,7 +322,7 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
bool Ascii::DoRotate(string rotated_path, double open, double close, bool terminating) bool Ascii::DoRotate(string rotated_path, double open, double close, bool terminating)
{ {
// Don't rotate special files or if there's not one currently open. // Don't rotate special files or if there's not one currently open.
if ( ! file || IsSpecial(Path()) ) if ( ! file || IsSpecial(Info().path) )
return true; return true;
fclose(file); fclose(file);


@ -19,7 +19,7 @@ public:
static string LogExt(); static string LogExt();
protected: protected:
virtual bool DoInit(string path, int num_fields, virtual bool DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const* fields); const threading::Field* const* fields);
virtual bool DoWrite(int num_fields, const threading::Field* const* fields, virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
threading::Value** vals); threading::Value** vals);


@ -263,7 +263,7 @@ bool DataSeries::OpenLog(string path)
return true; return true;
} }
bool DataSeries::DoInit(string path, int num_fields, const threading::Field* const * fields) bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading::Field* const * fields)
{ {
// We first construct an XML schema thing (and, if ds_dump_schema is // We first construct an XML schema thing (and, if ds_dump_schema is
// set, dump it to path + ".ds.xml"). Assuming that goes well, we // set, dump it to path + ".ds.xml"). Assuming that goes well, we
@ -298,11 +298,11 @@ bool DataSeries::DoInit(string path, int num_fields, const threading::Field* con
schema_list.push_back(val); schema_list.push_back(val);
} }
string schema = BuildDSSchemaFromFieldTypes(schema_list, path); string schema = BuildDSSchemaFromFieldTypes(schema_list, info.path);
if( ds_dump_schema ) if( ds_dump_schema )
{ {
FILE* pFile = fopen ( string(path + ".ds.xml").c_str() , "wb" ); FILE* pFile = fopen ( string(info.path + ".ds.xml").c_str() , "wb" );
if( pFile ) if( pFile )
{ {
@ -340,7 +340,7 @@ bool DataSeries::DoInit(string path, int num_fields, const threading::Field* con
log_type = log_types.registerTypePtr(schema); log_type = log_types.registerTypePtr(schema);
log_series.setType(log_type); log_series.setType(log_type);
return OpenLog(path); return OpenLog(info.path);
} }
bool DataSeries::DoFlush() bool DataSeries::DoFlush()
@ -401,7 +401,7 @@ bool DataSeries::DoRotate(string rotated_path, double open, double close, bool t
// size will be (much) larger. // size will be (much) larger.
CloseLog(); CloseLog();
string dsname = Path() + ".ds"; string dsname = Info().path + ".ds";
string nname = rotated_path + ".ds"; string nname = rotated_path + ".ds";
rename(dsname.c_str(), nname.c_str()); rename(dsname.c_str(), nname.c_str());
@ -411,7 +411,7 @@ bool DataSeries::DoRotate(string rotated_path, double open, double close, bool t
return false; return false;
} }
return OpenLog(Path()); return OpenLog(Info().path);
} }
bool DataSeries::DoSetBuf(bool enabled) bool DataSeries::DoSetBuf(bool enabled)


@ -26,7 +26,7 @@ public:
protected: protected:
// Overidden from WriterBackend. // Overidden from WriterBackend.
virtual bool DoInit(string path, int num_fields, virtual bool DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const * fields); const threading::Field* const * fields);
virtual bool DoWrite(int num_fields, const threading::Field* const* fields, virtual bool DoWrite(int num_fields, const threading::Field* const* fields,


@ -1,14 +1,41 @@
#include "None.h" #include "None.h"
#include "NetVar.h"
using namespace logging; using namespace logging;
using namespace writer; using namespace writer;
bool None::DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const * fields)
{
if ( BifConst::LogNone::debug )
{
std::cout << "[logging::writer::None]" << std::endl;
std::cout << " path=" << info.path << std::endl;
std::cout << " rotation_interval=" << info.rotation_interval << std::endl;
std::cout << " rotation_base=" << info.rotation_base << std::endl;
for ( std::map<string,string>::const_iterator i = info.config.begin(); i != info.config.end(); i++ )
std::cout << " config[" << i->first << "] = " << i->second << std::endl;
for ( int i = 0; i < num_fields; i++ )
{
const threading::Field* field = fields[i];
std::cout << " field " << field->name << ": "
<< type_name(field->type) << std::endl;
}
std::cout << std::endl;
}
return true;
}
bool None::DoRotate(string rotated_path, double open, double close, bool terminating) bool None::DoRotate(string rotated_path, double open, double close, bool terminating)
{ {
if ( ! FinishedRotation(string("/dev/null"), Path(), open, close, terminating)) if ( ! FinishedRotation(string("/dev/null"), Info().path, open, close, terminating))
{ {
Error(Fmt("error rotating %s", Path().c_str())); Error(Fmt("error rotating %s", Info().path.c_str()));
return false; return false;
} }


@ -18,8 +18,8 @@ public:
{ return new None(frontend); } { return new None(frontend); }
protected: protected:
virtual bool DoInit(string path, int num_fields, virtual bool DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const * fields) { return true; } const threading::Field* const * fields);
virtual bool DoWrite(int num_fields, const threading::Field* const* fields, virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
threading::Value** vals) { return true; } threading::Value** vals) { return true; }
@ -27,7 +27,7 @@ protected:
virtual bool DoRotate(string rotated_path, double open, virtual bool DoRotate(string rotated_path, double open,
double close, bool terminating); double close, bool terminating);
virtual bool DoFlush() { return true; } virtual bool DoFlush() { return true; }
virtual bool DoFinish() { return true; } virtual bool DoFinish() { WriterBackend::DoFinish(); return true; }
}; };
} }


@ -317,6 +317,8 @@ void terminate_bro()
if ( remote_serializer ) if ( remote_serializer )
remote_serializer->LogStats(); remote_serializer->LogStats();
mgr.Drain();
log_mgr->Terminate(); log_mgr->Terminate();
thread_mgr->Terminate(); thread_mgr->Terminate();

src/socks-analyzer.pac Normal file

@ -0,0 +1,170 @@
%header{
StringVal* array_to_string(vector<uint8> *a);
%}
%code{
StringVal* array_to_string(vector<uint8> *a)
{
int len = a->size();
char tmp[len];
char *s = tmp;
for ( vector<uint8>::iterator i = a->begin(); i != a->end(); *s++ = *i++ );
while ( len > 0 && tmp[len-1] == '\0' )
--len;
return new StringVal(len, tmp);
}
%}
refine connection SOCKS_Conn += {
function socks4_request(request: SOCKS4_Request): bool
%{
RecordVal* sa = new RecordVal(socks_address);
sa->Assign(0, new AddrVal(htonl(${request.addr})));
if ( ${request.v4a} )
sa->Assign(1, array_to_string(${request.name}));
BifEvent::generate_socks_request(bro_analyzer(),
bro_analyzer()->Conn(),
4,
${request.command},
sa,
new PortVal(${request.port} | TCP_PORT_MASK),
array_to_string(${request.user}));
static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(true);
return true;
%}
function socks4_reply(reply: SOCKS4_Reply): bool
%{
RecordVal* sa = new RecordVal(socks_address);
sa->Assign(0, new AddrVal(htonl(${reply.addr})));
BifEvent::generate_socks_reply(bro_analyzer(),
bro_analyzer()->Conn(),
4,
${reply.status},
sa,
new PortVal(${reply.port} | TCP_PORT_MASK));
bro_analyzer()->ProtocolConfirmation();
static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(false);
return true;
%}
function socks5_request(request: SOCKS5_Request): bool
%{
if ( ${request.reserved} != 0 )
{
bro_analyzer()->ProtocolViolation(fmt("invalid value in reserved field: %d", ${request.reserved}));
return false;
}
RecordVal* sa = new RecordVal(socks_address);
// This is dumb and there must be a better way (checking for presence of a field)...
switch ( ${request.remote_name.addr_type} )
{
case 1:
sa->Assign(0, new AddrVal(htonl(${request.remote_name.ipv4})));
break;
case 3:
sa->Assign(1, new StringVal(${request.remote_name.domain_name.name}.length(),
(const char*) ${request.remote_name.domain_name.name}.data()));
break;
case 4:
sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${request.remote_name.ipv6}, IPAddr::Network)));
break;
default:
bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${request.remote_name.addr_type}));
return false;
break;
}
BifEvent::generate_socks_request(bro_analyzer(),
bro_analyzer()->Conn(),
5,
${request.command},
sa,
new PortVal(${request.port} | TCP_PORT_MASK),
new StringVal(""));
static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(true);
return true;
%}
function socks5_reply(reply: SOCKS5_Reply): bool
%{
RecordVal* sa = new RecordVal(socks_address);
// This is dumb and there must be a better way (checking for presence of a field)...
switch ( ${reply.bound.addr_type} )
{
case 1:
sa->Assign(0, new AddrVal(htonl(${reply.bound.ipv4})));
break;
case 3:
sa->Assign(1, new StringVal(${reply.bound.domain_name.name}.length(),
(const char*) ${reply.bound.domain_name.name}.data()));
break;
case 4:
sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${reply.bound.ipv6}, IPAddr::Network)));
break;
default:
bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${reply.bound.addr_type}));
return false;
break;
}
BifEvent::generate_socks_reply(bro_analyzer(),
bro_analyzer()->Conn(),
5,
${reply.reply},
sa,
new PortVal(${reply.port} | TCP_PORT_MASK));
bro_analyzer()->ProtocolConfirmation();
static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(false);
return true;
%}
function version_error(version: uint8): bool
%{
bro_analyzer()->ProtocolViolation(fmt("unsupported/unknown SOCKS version %d", version));
return true;
%}
};
refine typeattr SOCKS_Version_Error += &let {
proc: bool = $context.connection.version_error(version);
};
refine typeattr SOCKS4_Request += &let {
proc: bool = $context.connection.socks4_request(this);
};
refine typeattr SOCKS4_Reply += &let {
proc: bool = $context.connection.socks4_reply(this);
};
refine typeattr SOCKS5_Request += &let {
proc: bool = $context.connection.socks5_request(this);
};
refine typeattr SOCKS5_Reply += &let {
proc: bool = $context.connection.socks5_reply(this);
};

src/socks-protocol.pac Normal file

@ -0,0 +1,119 @@
type SOCKS_Version(is_orig: bool) = record {
version: uint8;
msg: case version of {
4 -> socks4_msg: SOCKS4_Message(is_orig);
5 -> socks5_msg: SOCKS5_Message(is_orig);
default -> socks_msg_fail: SOCKS_Version_Error(version);
};
};
type SOCKS_Version_Error(version: uint8) = record {
nothing: empty;
};
# SOCKS5 Implementation
type SOCKS5_Message(is_orig: bool) = case $context.connection.v5_past_authentication() of {
true -> msg: SOCKS5_Real_Message(is_orig);
false -> auth: SOCKS5_Auth_Negotiation(is_orig);
};
type SOCKS5_Auth_Negotiation(is_orig: bool) = case is_orig of {
true -> req: SOCKS5_Auth_Negotiation_Request;
false -> rep: SOCKS5_Auth_Negotiation_Reply;
};
type SOCKS5_Auth_Negotiation_Request = record {
method_count: uint8;
methods: uint8[method_count];
};
type SOCKS5_Auth_Negotiation_Reply = record {
selected_auth_method: uint8;
} &let {
past_auth = $context.connection.set_v5_past_authentication();
};
type SOCKS5_Real_Message(is_orig: bool) = case is_orig of {
true -> request: SOCKS5_Request;
false -> reply: SOCKS5_Reply;
};
type Domain_Name = record {
len: uint8;
name: bytestring &length=len;
} &byteorder = bigendian;
type SOCKS5_Address = record {
addr_type: uint8;
addr: case addr_type of {
1 -> ipv4: uint32;
3 -> domain_name: Domain_Name;
4 -> ipv6: uint32[4];
default -> err: bytestring &restofdata &transient;
};
} &byteorder = bigendian;
type SOCKS5_Request = record {
command: uint8;
reserved: uint8;
remote_name: SOCKS5_Address;
port: uint16;
} &byteorder = bigendian;
type SOCKS5_Reply = record {
reply: uint8;
reserved: uint8;
bound: SOCKS5_Address;
port: uint16;
} &byteorder = bigendian;
# SOCKS4 Implementation
type SOCKS4_Message(is_orig: bool) = case is_orig of {
true -> request: SOCKS4_Request;
false -> reply: SOCKS4_Reply;
};
type SOCKS4_Request = record {
command: uint8;
port: uint16;
addr: uint32;
user: uint8[] &until($element == 0);
host: case v4a of {
true -> name: uint8[] &until($element == 0); # v4a
false -> empty: uint8[] &length=0;
} &requires(v4a);
} &byteorder = bigendian &let {
v4a: bool = (addr <= 0x000000ff);
};
type SOCKS4_Reply = record {
zero: uint8;
status: uint8;
port: uint16;
addr: uint32;
} &byteorder = bigendian;
refine connection SOCKS_Conn += {
%member{
bool v5_authenticated_;
%}
%init{
v5_authenticated_ = false;
%}
function v5_past_authentication(): bool
%{
return v5_authenticated_;
%}
function set_v5_past_authentication(): bool
%{
v5_authenticated_ = true;
return true;
%}
};

src/socks.pac Normal file

@ -0,0 +1,24 @@
%include binpac.pac
%include bro.pac
%extern{
#include "SOCKS.h"
%}
analyzer SOCKS withcontext {
connection: SOCKS_Conn;
flow: SOCKS_Flow;
};
connection SOCKS_Conn(bro_analyzer: BroAnalyzer) {
upflow = SOCKS_Flow(true);
downflow = SOCKS_Flow(false);
};
%include socks-protocol.pac
flow SOCKS_Flow(is_orig: bool) {
datagram = SOCKS_Version(is_orig) withcontext(connection, this);
};
%include socks-analyzer.pac


@ -80,18 +80,22 @@ const char* BasicThread::Fmt(const char* format, ...)
void BasicThread::Start() void BasicThread::Start()
{ {
if ( started ) if ( started )
return; return;
if ( pthread_mutex_init(&terminate, 0) != 0 ) int err = pthread_mutex_init(&terminate, 0);
reporter->FatalError("Cannot create terminate mutex for thread %s", name.c_str()); if ( err != 0 )
reporter->FatalError("Cannot create terminate mutex for thread %s:%s", name.c_str(), strerror(err));
// We use this like a binary semaphore and acquire it immediately. // We use this like a binary semaphore and acquire it immediately.
if ( pthread_mutex_lock(&terminate) != 0 ) err = pthread_mutex_lock(&terminate);
reporter->FatalError("Cannot acquire terminate mutex for thread %s", name.c_str()); if ( err != 0 )
reporter->FatalError("Cannot acquire terminate mutex for thread %s:%s", name.c_str(), strerror(err));
if ( pthread_create(&pthread, 0, BasicThread::launcher, this) != 0 ) err = pthread_create(&pthread, 0, BasicThread::launcher, this);
reporter->FatalError("Cannot create thread %s", name.c_str()); if ( err != 0 )
reporter->FatalError("Cannot create thread %s:%s", name.c_str(), strerror(err));
DBG_LOG(DBG_THREADING, "Started thread %s", name.c_str()); DBG_LOG(DBG_THREADING, "Started thread %s", name.c_str());


@ -170,6 +170,17 @@ enum ID %{
Unknown, Unknown,
%} %}
module Tunnel;
enum Type %{
NONE,
IP,
AYIYA,
TEREDO,
SOCKS,
%}
type EncapsulatingConn: record;
module Input; module Input;
enum Reader %{ enum Reader %{


@ -633,12 +633,20 @@ static bool write_random_seeds(const char* write_file, uint32 seed,
static bool bro_rand_determistic = false; static bool bro_rand_determistic = false;
static unsigned int bro_rand_state = 0; static unsigned int bro_rand_state = 0;
static void bro_srand(unsigned int seed, bool deterministic) static void bro_srandom(unsigned int seed, bool deterministic)
{ {
bro_rand_state = seed; bro_rand_state = seed;
bro_rand_determistic = deterministic; bro_rand_determistic = deterministic;
srand(seed); srandom(seed);
}
void bro_srandom(unsigned int seed)
{
if ( bro_rand_determistic )
bro_rand_state = seed;
else
srandom(seed);
} }
void init_random_seed(uint32 seed, const char* read_file, const char* write_file) void init_random_seed(uint32 seed, const char* read_file, const char* write_file)
@ -705,7 +713,7 @@ void init_random_seed(uint32 seed, const char* read_file, const char* write_file
seeds_done = true; seeds_done = true;
} }
bro_srand(seed, seeds_done); bro_srandom(seed, seeds_done);
if ( ! hmac_key_set ) if ( ! hmac_key_set )
{ {
@ -1082,18 +1090,8 @@ const char* log_file_name(const char* tag)
return fmt("%s.%s", tag, (env ? env : "log")); return fmt("%s.%s", tag, (env ? env : "log"));
} }
double calc_next_rotate(double interval, const char* rotate_base_time) double parse_rotate_base_time(const char* rotate_base_time)
{ {
double current = network_time;
// Calculate start of day.
time_t teatime = time_t(current);
struct tm t;
t = *localtime(&teatime);
t.tm_hour = t.tm_min = t.tm_sec = 0;
double startofday = mktime(&t);
double base = -1; double base = -1;
if ( rotate_base_time && rotate_base_time[0] != '\0' ) if ( rotate_base_time && rotate_base_time[0] != '\0' )
@ -1105,6 +1103,19 @@ double calc_next_rotate(double interval, const char* rotate_base_time)
base = t.tm_min * 60 + t.tm_hour * 60 * 60; base = t.tm_min * 60 + t.tm_hour * 60 * 60;
} }
return base;
}
double calc_next_rotate(double current, double interval, double base)
{
// Calculate start of day.
time_t teatime = time_t(current);
struct tm t;
t = *localtime_r(&teatime, &t);
t.tm_hour = t.tm_min = t.tm_sec = 0;
double startofday = mktime(&t);
if ( base < 0 ) if ( base < 0 )
// No base time given. To get nice timestamps, we round // No base time given. To get nice timestamps, we round
// the time up to the next multiple of the rotation interval. // the time up to the next multiple of the rotation interval.


@ -160,6 +160,10 @@ extern bool have_random_seed();
// predictable PRNG. // predictable PRNG.
long int bro_random(); long int bro_random();
// Calls the system srandom() function with the given seed if not running
// in deterministic mode, else it updates the state of the deterministic PRNG.
void bro_srandom(unsigned int seed);
extern uint64 rand64bit(); extern uint64 rand64bit();
// Each event source that may generate events gets an internally unique ID. // Each event source that may generate events gets an internally unique ID.
@ -194,9 +198,22 @@ extern FILE* rotate_file(const char* name, RecordVal* rotate_info);
// This mimics the script-level function with the same name. // This mimics the script-level function with the same name.
const char* log_file_name(const char* tag); const char* log_file_name(const char* tag);
// Parse a time string of the form "HH:MM" (as used for the rotation base
// time) into a double representing the number of seconds. Returns -1 if the
// string cannot be parsed. The function's result is intended to be used with
// calc_next_rotate().
//
// This function is not thread-safe.
double parse_rotate_base_time(const char* rotate_base_time);
// Calculate the duration until the next time a file is to be rotated, based // Calculate the duration until the next time a file is to be rotated, based
// on the given rotate_interval and rotate_base_time. // on the given rotate_interval and rotate_base_time. 'current' the the
double calc_next_rotate(double rotate_interval, const char* rotate_base_time); // current time to be used as base, 'rotate_interval' the rotation interval,
// and 'base' the value returned by parse_rotate_base_time(). If the latter
// returned -1, that's fine; calc_next_rotate() handles that case.
//
// This function is thread-safe.
double calc_next_rotate(double current, double rotate_interval, double base);
// Terminates processing gracefully, similar to pressing CTRL-C. // Terminates processing gracefully, similar to pressing CTRL-C.
void terminate_processing(); void terminate_processing();
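The intent of the split is that the non-thread-safe `parse_rotate_base_time()` runs once on the main thread, and each writer thread then calls the thread-safe `calc_next_rotate()` with the precomputed base. A minimal sketch of that call pattern — an illustrative re-implementation, not the code from Bro's util.cc:

```cpp
#include <cmath>
#include <cstdio>

// Sketch of parse_rotate_base_time(): parse "HH:MM" into seconds after
// midnight; return -1 if the string cannot be parsed. Called once,
// outside the writer threads.
double parse_rotate_base_time(const char* rotate_base_time)
	{
	int h = 0, m = 0;
	if ( ! rotate_base_time || sscanf(rotate_base_time, "%d:%d", &h, &m) != 2 )
		return -1;
	return h * 3600.0 + m * 60.0;
	}

// Sketch of the thread-safe calc_next_rotate(): seconds from 'current'
// until the next rotation boundary. A negative 'base' (the parse-failure
// value) falls back to plain multiples of the interval, mirroring the
// "calc_next_rotate() handles that" note in the header comment.
double calc_next_rotate(double current, double rotate_interval, double base)
	{
	double t = base < 0 ? current : current - base;
	return rotate_interval - fmod(t, rotate_interval);
	}
```

A writer thread would then compute its deadline as `calc_next_rotate(network_time, info.rotation_interval, info.rotation_base)` without touching any shared parsing state.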

View file

@@ -1,6 +1,6 @@
-185
-236
-805
-47
-996
-498
+985
+474
+738
+4
+634
+473

View file

@@ -0,0 +1,6 @@
+985
+474
+738
+974
+371
+638

View file

@@ -0,0 +1,18 @@
+ftp field missing
+[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp]

View file

@@ -5,12 +5,12 @@
 #path reporter
 #fields ts level message location
 #types time enum string string
-1300475168.783842 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.915940 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.916118 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.918295 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.952193 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.952228 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.954761 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.962628 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475169.780331 Reporter::ERROR field value missing [c$ftp] /Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
+1300475168.783842 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.915940 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.916118 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.918295 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.952193 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.952228 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.954761 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.962628 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475169.780331 Reporter::ERROR field value missing [c$ftp] /home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10

View file

@@ -41,6 +41,7 @@ icmp_echo_reply (id=1, seq=6, payload=abcdefghijklmnopqrstuvwabcdefghi)
 icmp_redirect (tgt=fe80::cafe, dest=fe80::babe)
 conn_id: [orig_h=fe80::dead, orig_p=137/icmp, resp_h=fe80::beef, resp_p=0/icmp]
 icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=137, icode=0, len=32, hlim=255, v6=T]
+options: []
 icmp_router_advertisement
 cur_hop_limit=13
 managed=T
@@ -54,15 +55,19 @@ icmp_router_advertisement
 retrans_timer=1.0 sec 300.0 msecs
 conn_id: [orig_h=fe80::dead, orig_p=134/icmp, resp_h=fe80::beef, resp_p=133/icmp]
 icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=134, icode=0, len=8, hlim=255, v6=T]
+options: []
 icmp_neighbor_advertisement (tgt=fe80::babe)
 router=T
 solicited=F
 override=T
 conn_id: [orig_h=fe80::dead, orig_p=136/icmp, resp_h=fe80::beef, resp_p=135/icmp]
 icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=136, icode=0, len=16, hlim=255, v6=T]
+options: []
 icmp_router_solicitation
 conn_id: [orig_h=fe80::dead, orig_p=133/icmp, resp_h=fe80::beef, resp_p=134/icmp]
 icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=133, icode=0, len=0, hlim=255, v6=T]
+options: []
 icmp_neighbor_solicitation (tgt=fe80::babe)
 conn_id: [orig_h=fe80::dead, orig_p=135/icmp, resp_h=fe80::beef, resp_p=136/icmp]
 icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=135, icode=0, len=16, hlim=255, v6=T]
+options: []

Some files were not shown because too many files have changed in this diff.