mirror of https://github.com/zeek/zeek.git
synced 2025-10-05 08:08:19 +00:00

commit b8ad4567fb

Merge branch 'topic/bernhard/reader-info' into topic/bernhard/sqlite

Now uses optional dbname configuration option.

Conflicts:
	scripts/base/frameworks/logging/__load__.bro
	src/logging.bif

167 changed files with 5537 additions and 1769 deletions
CHANGES | 80

@@ -1,4 +1,84 @@

2.0-709 | 2012-06-21 10:14:24 -0700

  * Fix exceptions thrown in event handlers preventing others from running. (Jon Siwek)

  * Add another SOCKS command. (Seth Hall)

  * Fixed some problems with the SOCKS analyzer and tests. (Seth Hall)

  * Updating NEWS in preparation for beta. (Robin Sommer)

  * Accepting different AF_INET6 values for loopback link headers.
    (Robin Sommer)

2.0-698 | 2012-06-20 14:30:40 -0700

  * Updates for the SOCKS analyzer. (Seth Hall)

    - A SOCKS log!

    - Now supports SOCKSv5 in the analyzer and the DPD sigs.

    - Added protocol violations.

  * Updates to the tunnels framework. (Seth Hall)

    - Make the uid field optional, since it's conceptually incorrect
      for proxies treated as tunnels to have it.

    - Reordered two fields in the log.

    - Reduced the default tunnel expiration interval to something
      more reasonable (1 hour).

  * Make Teredo bubble packet parsing more lenient. (Jon Siwek)

  * Fix a crash in NetSessions::ParseIPPacket(). (Jon Siwek)

2.0-690 | 2012-06-18 16:01:33 -0700

  * Support for decapsulating tunnels via the new tunnel framework in
    base/frameworks/tunnels.

    Bro currently supports Teredo, AYIYA, IP-in-IP (both IPv4 and
    IPv6), and SOCKS. For all of these, it logs the outer tunnel
    connections in both conn.log and tunnel.log, and proceeds to
    analyze the inner payload as if it were not tunneled, including
    also logging it in conn.log (with a new tunnel_parents column
    pointing back to the outer connection(s)). (Jon Siwek, Seth Hall,
    Gregor Maier)

  * The options "tunnel_port" and "parse_udp_tunnels" have been
    removed. (Jon Siwek)

2.0-623 | 2012-06-15 16:24:52 -0700

  * Changing an error in the input framework to a warning. (Robin
    Sommer)

2.0-622 | 2012-06-15 15:38:43 -0700

  * Input framework updates. (Bernhard Amann)

    - Disable streaming reads from executed commands. This led to
      hanging Bros because pclose() can apparently wait forever if
      things go wrong.

    - Automatically delete disabled input streams.

    - Documentation.

2.0-614 | 2012-06-15 15:19:49 -0700

  * Remove an old, unused diff canonifier. (Jon Siwek)

  * Improve an error message in the ICMP analyzer. (Jon Siwek)

  * Fix a warning message when building docs. (Daniel Thayer)

  * Fix many errors in the event documentation. (Daniel Thayer)

2.0-608 | 2012-06-11 15:59:00 -0700

  * Add more error handling code to logging of enum vals. Addresses
NEWS | 77

@@ -6,10 +6,71 @@ This document summarizes the most important changes in the current Bro

release. For a complete list of changes, see the ``CHANGES`` file.

Bro 2.1 Beta
------------

New Functionality
~~~~~~~~~~~~~~~~~

- Bro now comes with extensive IPv6 support. Past versions offered
  only basic IPv6 functionality that was rarely used in practice as it
  had to be enabled explicitly. IPv6 support is now fully integrated
  into all parts of Bro, including protocol analysis and the scripting
  language. It's on by default and no longer requires any special
  configuration.

  Some of the most significant enhancements include support for IPv6
  fragment reassembly, support for following IPv6 extension header
  chains, and support for tunnel decapsulation (6to4 and Teredo). The
  DNS analyzer now handles AAAA records properly, and DNS lookups that
  Bro itself performs now include AAAA queries, so that, for example,
  the result returned by script-level lookups is a set that can
  contain both IPv4 and IPv6 addresses. Support for the most common
  ICMPv6 message types has been added. Also, the FTP EPSV and EPRT
  commands are now handled properly. Internally, the way IP addresses
  are stored has been improved, so Bro can handle both IPv4 and IPv6
  by default without any special configuration.

  In addition to Bro itself, the other Bro components have also been
  made IPv6-aware by default. In particular, significant changes were
  made to trace-summary, PySubnetTree, and Broccoli to support IPv6.

- Bro now decapsulates tunnels via its new tunnel framework located in
  scripts/base/frameworks/tunnels. It currently supports Teredo,
  AYIYA, IP-in-IP (both IPv4 and IPv6), and SOCKS. For all of these,
  it logs the outer tunnel connections in both conn.log and
  tunnel.log, and then proceeds to analyze the inner payload as if it
  were not tunneled, including also logging that session in conn.log.
  For SOCKS, it additionally generates a new socks.log with more
  information.

- Bro now features a flexible input framework that allows users to
  integrate external information in real-time into Bro while it
  processes network traffic. The most direct use-case at the moment
  is reading data from ASCII files into Bro tables, with updates
  picked up automatically when the file changes during runtime. See
  doc/input.rst for more information.

  Internally, the input framework is structured around the notion of
  "reader plugins" that make it easy to interface to different data
  sources. We will add more in the future.

- Bro's default ASCII log format is not exactly the most efficient way
  for storing and searching large volumes of data. As an alternative,
  Bro now comes with experimental support for DataSeries output, an
  efficient binary format for recording structured bulk data.
  DataSeries is developed and maintained at HP Labs. See
  doc/logging-dataseries for more information.

Changed Functionality
~~~~~~~~~~~~~~~~~~~~~

The following summarizes the most important differences in existing
functionality. Note that this list is not complete; see CHANGES for
the full set.

- Changes in dependencies:

  * Bro now requires CMake >= 2.6.3.

@@ -17,8 +78,7 @@ Bro 2.1

  configure time. Doing so can significantly improve memory and
  CPU use.

- The configure switch --enable-brov6 is gone.

- DNS name lookups performed by Bro now also query AAAA records. The
  results of the A and AAAA queries for a given hostname are combined

@@ -35,7 +95,7 @@ Bro 2.1

- The syntax for IPv6 literals changed from "2607:f8b0:4009:802::1012"
  to "[2607:f8b0:4009:802::1012]".

- Bro now spawns threads for doing its logging. From a user's
  perspective not much should change, except that the OS may now show
  a bunch of Bro threads.

@@ -60,7 +120,10 @@ Bro 2.1

  signature_files constant, this can be used to load signatures
  relative to the current script (e.g., "@load-sigs ./foo.sig").

- The options "tunnel_port" and "parse_udp_tunnels" have been removed.
  Bro now supports decapsulating tunnels directly for protocols it
  understands.

Bro 2.0
-------
VERSION | 2

@@ -1 +1 @@
-2.0-608
+2.0-709
@@ -1 +1 @@
-Subproject commit 0d139c09d5a9c8623ecc2a5f395178f0ddcd7e16
+Subproject commit f1b0a395ab32388d8375ab72ec263b6029833f96

@@ -1 +1 @@
-Subproject commit 4697bf4c8046a3ab7d5e00e926c5db883cb44664
+Subproject commit 585645371256e8ec028cabae24c5f4a2108546d2
@@ -165,6 +165,10 @@
 #ifndef HAVE_IPPROTO_IPV6
 #define IPPROTO_IPV6 41
 #endif
+#cmakedefine HAVE_IPPROTO_IPV4
+#ifndef HAVE_IPPROTO_IPV4
+#define IPPROTO_IPV4 4
+#endif
 #cmakedefine HAVE_IPPROTO_ROUTING
 #ifndef HAVE_IPPROTO_ROUTING
 #define IPPROTO_ROUTING 43
doc/input.rst | 422

@@ -1,80 +1,276 @@

==============================================
Loading Data into Bro with the Input Framework
==============================================

.. rst-class:: opening

    Bro now features a flexible input framework that allows users
    to import data into Bro. Data is either read into Bro tables or
    converted to events which can then be handled by scripts.

    The input framework is automatically compiled and installed
    together with Bro. The interface to it is exposed via the
    scripting layer.

    This document gives the most common examples. For more complex
    scenarios it is worthwhile to take a look at the unit tests in
    ``testing/btest/scripts/base/frameworks/input/``.

.. contents::

Reading Data into Tables
========================

Probably the most interesting use-case of the input framework is to
read data into a Bro table.

By default, the input framework reads the data in the same format
as it is written by the logging framework in Bro - a tab-separated
ASCII file.

We will show the ways to read files into Bro with a simple example.
For this example we assume that we want to import data from a blacklist
that contains server IP addresses as well as the timestamp and the reason
for the block.

An example input file could look like this:

::

    #fields ip      timestamp       reason
    192.168.17.1    1333252748      Malware host
    192.168.27.2    1330235733      Botnet server
    192.168.250.3   1333145108      Virus detected
To read a file into a Bro table, two record types have to be defined.
One contains the types and names of the columns that should constitute the
table keys and the second contains the types and names of the columns that
should constitute the table values.

In our case, we want to be able to look up IPs. Hence, our key record
only contains the server IP. All other elements should be stored as
the table content.

The two records are defined as:

.. code:: bro

    type Idx: record {
        ip: addr;
    };

    type Val: record {
        timestamp: time;
        reason: string;
    };

Note that the names of the fields in the record definitions have to correspond to
the column names listed in the '#fields' line of the log file, in this case 'ip',
'timestamp', and 'reason'.

The log file is read into the table with a simple call of the add_table function:

.. code:: bro

    global blacklist: table[addr] of Val = table();

    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]);
    Input::remove("blacklist");
With these three lines we first create an empty table that should contain the
blacklist data and then instruct the input framework to open an input stream
named ``blacklist`` to read the data into the table. The third line removes the
input stream again, because we do not need it any more after the data has been
read.

Because some data files can - potentially - be rather big, the input framework
works asynchronously. A new thread is created for each new input stream.
This thread opens the input data file, converts the data into a Bro format and
sends it back to the main Bro thread.

Because of this, the data is not immediately accessible. Depending on the
size of the data source it might take from a few milliseconds up to a few seconds
until all data is present in the table. Please note that this means that when Bro
is running without an input source or on very short captured files, it might terminate
before the data is present in the system (because Bro already handled all packets
before the import thread finished).

Subsequent calls to an input source are queued until the previous action has been
completed. Because of this, it is, for example, possible to call ``add_table`` and
``remove`` in two subsequent lines: the ``remove`` action will remain queued until
the first read has been completed.

Once the input framework finishes reading from a data source, it fires the
``update_finished`` event. Once this event has been received, all data from the
input file is available in the table.

.. code:: bro

    event Input::update_finished(name: string, source: string) {
        # now all data is in the table
        print blacklist;
    }

The table can also already be used while the data is still being read - it just might
not contain all lines in the input file when the event has not yet fired. After it has
been populated it can be used like any other Bro table, and blacklist entries can easily
be tested:

.. code:: bro

    if ( 192.168.18.12 in blacklist )
        # take action
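Putting the pieces above together, a minimal end-to-end sketch could look like the
following. The ``blacklist_ready`` flag is purely illustrative (not part of the
framework); it shows one way to account for the asynchronous read before trusting
lookups against the table.

.. code:: bro

    global blacklist: table[addr] of Val = table();
    global blacklist_ready = F;   # illustrative flag, not part of the framework

    event bro_init() {
        Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]);
        Input::remove("blacklist");
    }

    event Input::update_finished(name: string, source: string) {
        # all data from blacklist.file is now in the table
        blacklist_ready = T;
    }

Because reads happen in a separate thread, a handler like this is the reliable
point at which the table is known to be fully populated.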
Re-reading and streaming data
-----------------------------

For many data sources, like for many blacklists, the source data is continually
changing. For these cases, the Bro input framework supports several ways to
deal with changing data files.

The first, very basic method is an explicit refresh of an input stream. When an input
stream is open, the function ``force_update`` can be called. This will trigger
a complete refresh of the table; any changed elements from the file will be updated.
After the update is finished the ``update_finished`` event will be raised.

In our example the call would look like:

.. code:: bro

    Input::force_update("blacklist");

The input framework also supports two automatic refresh modes. The first mode
continually checks if a file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current state.
Each time a change has been detected and all the new data has been read into the
table, the ``update_finished`` event is raised.

The second mode is a streaming mode. This mode assumes that the source data file
is an append-only file to which new data is continually appended. Bro continually
checks for new data at the end of the file and will add the new data to the table.
If newer lines in the file have the same index as previous lines, they will overwrite
the values in the output table.
Because of the nature of streaming reads (data is continually added to the table),
the ``update_finished`` event is never raised when using streaming reads.

The reading mode can be selected by setting the ``mode`` option of the add_table call.
Valid values are ``MANUAL`` (the default), ``REREAD`` and ``STREAM``.

Hence, when adding ``$mode=Input::REREAD`` to the previous example, the blacklist
table will always reflect the state of the blacklist input file.

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD]);
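Analogously, an append-only source can be followed with the streaming mode. This
sketch assumes the same ``Idx``/``Val`` records as above:

.. code:: bro

    # Follow an append-only file; new lines show up in the table as they are written.
    # Note that update_finished is never raised in this mode.
    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::STREAM]);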
Receiving change events
-----------------------

When re-reading files, it might be interesting to know exactly which lines in the source
files have changed.

For this reason, the input framework can raise an event each time a data item is added to,
removed from, or changed in a table.

The event definition looks like this:

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) {
        # act on values
    }

The event has to be specified in ``$ev`` in the ``add_table`` call:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]);

The ``description`` field of the event contains the arguments that were originally
supplied to the add_table call. Hence, the name of the stream can, for example, be
accessed with ``description$name``. ``tpe`` is an enum containing the type of the
change that occurred.

It will contain ``Input::EVENT_NEW`` when a line that was not previously present
in the table has been added. In this case ``left`` contains the index of the added
table entry and ``right`` contains the values of the added entry.

If a table entry that was already present is altered during the re-reading or
streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In this
case ``left`` contains the index of the changed table entry and ``right`` contains
the values of the entry before the change. The reason for this is that the table
has already been updated when the event is raised. The current value in the table
can be ascertained by looking up the current table value. Hence it is possible to
compare the new and the old value of the table.

``tpe`` contains ``Input::REMOVED`` when a table element is removed because it was
no longer present during a re-read. In this case ``left`` contains the index and
``right`` the values of the removed element.
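As a sketch of the comparison described above (field names taken from the earlier
``Idx``/``Val`` records; the printed message is illustrative), an ``EVENT_CHANGED``
handler can contrast the old value in ``right`` with the new value already stored
in the table:

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) {
        if ( tpe == Input::EVENT_CHANGED )
            # right holds the pre-change values; the table already holds the new ones
            print fmt("%s: reason changed from '%s' to '%s'",
                      left$ip, right$reason, blacklist[left$ip]$reason);
    }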
Filtering data during import
----------------------------

The input framework also allows a user to filter the data during the import. To
this end, predicate functions are used. A predicate function is called before a
new element is added/changed/removed from a table. The predicate can either accept
or veto the change by returning true for an accepted change and false for a
rejected change. Furthermore, it can alter the data before it is written to the
table.

The following example filter will reject adding entries to the table when they
were generated over a month ago. It will accept all changes and all removals of
values that are already present in the table.

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD,
        $pred(typ: Input::Event, left: Idx, right: Val) = {
            if ( typ != Input::EVENT_NEW ) {
                return T;
            }
            return ( ( current_time() - right$timestamp ) < (30 day) );
        }]);

To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called before
the change is committed to the table. Hence, when a table element is changed
(``tpe`` is ``Input::EVENT_CHANGED``), ``left`` and ``right`` contain the new
values, but the destination (``blacklist`` in our example) still contains the old
values. This allows predicate functions to examine the changes between the old and
the new version before deciding if they should be allowed.
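A predicate can also normalize data on the way in. As a sketch (assuming the same
records as above), this lower-cases the reason string before it is committed:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist,
        $pred(typ: Input::Event, left: Idx, right: Val) = {
            # modify the value in place before it is written to the table
            right$reason = to_lower(right$reason);
            return T;
        }]);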
Different readers
-----------------

The input framework supports different kinds of readers for different kinds of
source data files. At the moment, the default reader reads ASCII files formatted
in the Bro log-file format (tab-separated values), and Bro comes with two other
readers. The ``RAW`` reader reads a file that is split by a specified record
separator (usually newline). The contents are returned line-by-line as strings;
it can, for example, be used to read configuration files and the like, and is
probably only useful in the event mode and not for reading data to tables.

Another included reader is the ``BENCHMARK`` reader, which is being used to
optimize the speed of the input framework. It can generate arbitrary amounts of
semi-random data in all Bro data types supported by the input framework.

In the future, the input framework will get support for new data sources like,
for example, different databases.
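As a sketch of using the ``RAW`` reader in event mode (the record and event names
here are illustrative, not prescribed by the framework), each line of the file
arrives as a single string:

.. code:: bro

    type OneLine: record {
        s: string;
    };

    event raw_line(description: Input::EventDescription, tpe: Input::Event, s: string) {
        # each record (by default, each line) of the file arrives as a string
        print s;
    }

    event bro_init() {
        Input::add_event([$source="config.file", $name="rawinput", $reader=Input::READER_RAW, $fields=OneLine, $ev=raw_line]);
    }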
Add_table options
-----------------

This section lists all possible options that can be used for the add_table
function and gives a short explanation of their use. Most of the options have
already been discussed in the previous sections.

The possible fields that can be set for a table stream are:

``source``
    A mandatory string identifying the source of the data.
    For the ASCII reader this is the filename.

``name``
    A mandatory name for the filter that can later be used
    to manipulate it further.

``idx``
    Record type that defines the index of the table.

``val``
    Record type that defines the values of the table.

``reader``
    The reader used for this stream. Default is ``READER_ASCII``.

@@ -82,12 +278,70 @@ The fields that can be set for an event stream are:

``mode``
    The mode in which the stream is opened. Possible values are ``MANUAL``,
    ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
    ``MANUAL`` means that the file is not updated after it has been read. Changes
    to the file will not be reflected in the data Bro knows.
    ``REREAD`` means that the whole file is read again each time a change is
    found. This should be used for files that are mapped to a table where
    individual lines can change.
    ``STREAM`` means that the data from the file is streamed. Events / table
    entries will be generated as new data is added to the file.

``destination``
    The destination table.

``ev``
    Optional event that is raised when values are added to, changed in, or
    deleted from the table. Events are passed an Input::Event description as the
    first argument, the index record as the second argument and the values as
    the third argument.

``pred``
    Optional predicate that can prevent entries from being added to the table
    and events from being sent.

``want_record``
    Boolean value that defines if the event wants to receive the fields inside
    of a single record value, or individually (default).
    This can be used if ``val`` is a record containing only one type. In this
    case, if ``want_record`` is set to false, the table will contain elements of
    the type contained in ``val``.
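As an illustration of ``want_record`` with a single-column value (the ``OneVal``
record and ``why`` table are hypothetical names for this sketch), the table can
store the bare field type instead of a record:

.. code:: bro

    type Idx: record {
        ip: addr;
    };

    type OneVal: record {
        reason: string;
    };

    # With $want_record=F, table elements are plain strings rather than OneVal records.
    global why: table[addr] of string = table();

    Input::add_table([$source="blacklist.file", $name="why", $idx=Idx, $val=OneVal, $destination=why, $want_record=F]);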
Reading data to events
|
||||||
|
======================
|
||||||
|
|
The second supported mode of the input framework is reading data into Bro events
instead of reading it into a table, using event streams.

Event streams work very similarly to the table streams already discussed in
detail. To read the blacklist of the previous example into an event stream, the
following Bro code could be used:

.. code:: bro

    type Val: record {
        ip: addr;
        timestamp: time;
        reason: string;
    };

    event blacklistentry(description: Input::EventDescription, tpe: Input::Event, ip: addr, timestamp: time, reason: string) {
        # work with event data
    }

    event bro_init() {
        Input::add_event([$source="blacklist.file", $name="blacklist", $fields=Val, $ev=blacklistentry]);
    }

The main difference in the declaration of an event stream is that an event
stream needs no separate index and value declarations -- instead, all source
data types are provided in a single record definition.

Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.

The options that can be set when creating an event stream with ``add_event`` are:

``source``
    A mandatory string identifying the source of the data.
    For the ASCII reader this is the filename.

``name``
    A mandatory name for the stream that can later be used
    to remove it.

@ -102,82 +356,26 @@ The fields that can be set for an event stream are:

followed by the data, either inside of a record (if ``want_record`` is set) or
as individual fields.
The Input::Event structure indicates whether the received line is ``NEW``, has
been ``CHANGED`` or ``DELETED``. Since the ASCII reader cannot track this
information for event filters, the value is always ``NEW`` at the moment.
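For reference, the ``blacklist.file`` source read above is an ASCII input file. A minimal sketch of its expected layout might look like the following (tab-separated columns named after the fields of ``Val`` in a ``#fields`` header line; the concrete values here are made up for illustration):

::

    #fields	ip	timestamp	reason
    192.168.17.1	1333252748	Malware host
    192.168.27.2	1330235733	Botnet server

Each data line then raises one ``blacklistentry`` event, with the three columns passed as individual arguments.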
Table Streams
=============

Table streams are the second, more complex type of input stream.

Table streams store the information they read from an input source in a Bro
table. For example, when reading a file that contains IP addresses and
connection attempt information, one could use an approach similar to this:

.. code:: bro

    type Idx: record {
        a: addr;
    };

    type Val: record {
        tries: count;
    };

    global conn_attempts: table[addr] of count = table();

    event bro_init() {
        Input::add_table([$source="input.txt", $name="input", $idx=Idx, $val=Val, $destination=conn_attempts]);
    }

The table conn_attempts will then contain the information about connection
attempts.

The possible fields that can be set for a table stream are:

``source``
    A mandatory string identifying the source of the data.
    For the ASCII reader this is the filename.

``name``
    A mandatory name for the stream that can later be used
    to manipulate it further.

``idx``
    Record type that defines the index of the table.

``val``
    Record type that defines the values of the table.

``destination``
    The destination table.

``reader``
    The reader used for this stream. Default is ``READER_ASCII``.

``mode``
    The mode in which the stream is opened. Possible values are ``MANUAL``,
    ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
    ``MANUAL`` means that the file is not updated after it has been read.
    Changes to the file will not be reflected in the data Bro knows.
    ``REREAD`` means that the whole file is read again each time a change is
    found. This should be used for files that are mapped to a table where
    individual lines can change.
    ``STREAM`` means that the data from the file is streamed. Events/table
    entries will be generated as new data is added to the file.

``ev``
    Optional event that is raised when values are added to, changed in, or
    deleted from the table. Events are passed an Input::Event description as
    the first argument, the index record as the second argument and the
    values as the third argument.

``pred``
    Optional predicate that can prevent entries from being added to the table
    and events from being sent.

``want_record``
    Boolean value that defines if the event wants to receive the fields
    inside of a single record value, or individually (default). If this is
    set to true, the event will receive a single record of the type provided
    in ``fields``.
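As a sketch of how the ``ev`` and ``pred`` fields can be combined with the example above (the names ``connattempt`` and ``only_many_tries`` are illustrative helpers, not part of the framework):

.. code:: bro

    event connattempt(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) {
        # React to additions, changes, or deletions in the table.
    }

    function only_many_tries(typ: Input::Event, left: Idx, right: Val): bool {
        # Only admit hosts with more than 10 connection attempts;
        # everything else is skipped and raises no event.
        return right$tries > 10;
    }

    event bro_init() {
        Input::add_table([$source="input.txt", $name="input_filtered",
                          $idx=Idx, $val=Val, $destination=conn_attempts,
                          $ev=connattempt, $pred=only_many_tries]);
    }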
@ -59,6 +59,7 @@ rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
rest_target(${psd} base/frameworks/reporter/main.bro)
rest_target(${psd} base/frameworks/signatures/main.bro)
rest_target(${psd} base/frameworks/software/main.bro)
rest_target(${psd} base/frameworks/tunnels/main.bro)
rest_target(${psd} base/protocols/conn/contents.bro)
rest_target(${psd} base/protocols/conn/inactivity.bro)
rest_target(${psd} base/protocols/conn/main.bro)

@ -77,6 +78,8 @@ rest_target(${psd} base/protocols/irc/main.bro)
rest_target(${psd} base/protocols/smtp/entities-excerpt.bro)
rest_target(${psd} base/protocols/smtp/entities.bro)
rest_target(${psd} base/protocols/smtp/main.bro)
rest_target(${psd} base/protocols/socks/consts.bro)
rest_target(${psd} base/protocols/socks/main.bro)
rest_target(${psd} base/protocols/ssh/main.bro)
rest_target(${psd} base/protocols/ssl/consts.bro)
rest_target(${psd} base/protocols/ssl/main.bro)
@ -11,7 +11,8 @@ export {
	## The communication logging stream identifier.
	redef enum Log::ID += { LOG };

	## Which interface to listen on. The addresses ``0.0.0.0`` and ``[::]``
	## are wildcards.
	const listen_interface = 0.0.0.0 &redef;

	## Which port to listen on.
@ -149,3 +149,64 @@ signature dpd_ssl_client {
  payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
  tcp-state originator
}

signature dpd_ayiya {
  ip-proto == udp
  payload /^..\x11\x29/
  enable "ayiya"
}

signature dpd_teredo {
  ip-proto == udp
  payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
  enable "teredo"
}

signature dpd_socks4_client {
  ip-proto == tcp
  # '32' is a rather arbitrary max length for the user name.
  payload /^\x04[\x01\x02].{0,32}\x00/
  tcp-state originator
}

signature dpd_socks4_server {
  ip-proto == tcp
  requires-reverse-signature dpd_socks4_client
  payload /^\x00[\x5a\x5b\x5c\x5d]/
  tcp-state responder
  enable "socks"
}

signature dpd_socks4_reverse_client {
  ip-proto == tcp
  # '32' is a rather arbitrary max length for the user name.
  payload /^\x04[\x01\x02].{0,32}\x00/
  tcp-state responder
}

signature dpd_socks4_reverse_server {
  ip-proto == tcp
  requires-reverse-signature dpd_socks4_reverse_client
  payload /^\x00[\x5a\x5b\x5c\x5d]/
  tcp-state originator
  enable "socks"
}

signature dpd_socks5_client {
  ip-proto == tcp
  # Watch for a few authentication methods to reduce false positives.
  payload /^\x05.[\x00\x01\x02]/
  tcp-state originator
}

signature dpd_socks5_server {
  ip-proto == tcp
  requires-reverse-signature dpd_socks5_client
  # Watch for a single authentication method to be chosen by the server or
  # the server to indicate that no authentication is required.
  payload /^\x05(\x00|\x01[\x00\x01\x02])/
  tcp-state responder
  enable "socks"
}
@ -53,6 +53,10 @@ export {
		## really be executed. Parameters are the same as for the event. If true is
		## returned, the update is performed. If false is returned, it is skipped.
		pred: function(typ: Input::Event, left: any, right: any): bool &optional;

		## A key/value table that will be passed on to the reader.
		## Interpretation of the values is left to the reader.
		config: table[string] of string &default=table();
	};

	## EventFilter description type used for the ``event`` method.

@ -85,6 +89,9 @@ export {
		## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
		ev: any;

		## A key/value table that will be passed on to the reader.
		## Interpretation of the values is left to the reader.
		config: table[string] of string &default=table();
	};

	## Create a new table input from a given source. Returns true on success.
@ -3,3 +3,4 @@
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/sqlite
@load ./writers/none
@ -138,6 +138,10 @@ export {
		## Callback function to trigger for rotated files. If not set, the
		## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
		postprocessor: function(info: RotationInfo) : bool &optional;

		## A key/value table that will be passed on to the writer.
		## Interpretation of the values is left to the writer.
		config: table[string] of string &default=table();
	};

	## Sentinel value for indicating that a filter was not found when looked up.

@ -327,6 +331,8 @@ function __default_rotation_postprocessor(info: RotationInfo) : bool
	{
	if ( info$writer in default_rotation_postprocessors )
		return default_rotation_postprocessors[info$writer](info);

	return F;
	}

function default_path_func(id: ID, path: string, rec: any) : string
17 scripts/base/frameworks/logging/writers/none.bro Normal file
@ -0,0 +1,17 @@
##! Interface for the None log writer. This writer is mainly for debugging.

module LogNone;

export {
	## If true, output some debugging output that can be useful for unit
	## testing the logging framework.
	const debug = F &redef;
}

function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
	{
	return T;
	}

redef Log::default_rotation_postprocessors += { [Log::WRITER_NONE] = default_rotation_postprocessor_func };

1 scripts/base/frameworks/tunnels/__load__.bro Normal file
@ -0,0 +1 @@
@load ./main

149 scripts/base/frameworks/tunnels/main.bro Normal file
@ -0,0 +1,149 @@
##! This script handles the tracking/logging of tunnels (e.g. Teredo,
##! AYIYA, or IP-in-IP such as 6to4 where "IP" is either IPv4 or IPv6).
##!
##! For any connection that occurs over a tunnel, information about its
##! encapsulating tunnels is also found in the *tunnel* field of
##! :bro:type:`connection`.

module Tunnel;

export {
	## The tunnel logging stream identifier.
	redef enum Log::ID += { LOG };

	## Types of interesting activity that can occur with a tunnel.
	type Action: enum {
		## A new tunnel (encapsulating "connection") has been seen.
		DISCOVER,
		## A tunnel connection has closed.
		CLOSE,
		## No new connections over a tunnel happened in the amount of
		## time indicated by :bro:see:`Tunnel::expiration_interval`.
		EXPIRE,
	};

	## The record type which contains column fields of the tunnel log.
	type Info: record {
		## Time at which some tunnel activity occurred.
		ts:          time          &log;
		## The unique identifier for the tunnel, which may correspond
		## to a :bro:type:`connection`'s *uid* field for non-IP-in-IP tunnels.
		## This is optional because there could be numerous connections
		## for payload proxies like SOCKS but we should treat it as a single
		## tunnel.
		uid:         string        &log &optional;
		## The tunnel "connection" 4-tuple of endpoint addresses/ports.
		## For an IP tunnel, the ports will be 0.
		id:          conn_id       &log;
		## The type of tunnel.
		tunnel_type: Tunnel::Type  &log;
		## The type of activity that occurred.
		action:      Action        &log;
	};

	## Logs all tunnels in an encapsulation chain with action
	## :bro:see:`Tunnel::DISCOVER` that aren't already in the
	## :bro:id:`Tunnel::active` table and adds them if not.
	global register_all: function(ecv: EncapsulatingConnVector);

	## Logs a single tunnel "connection" with action
	## :bro:see:`Tunnel::DISCOVER` if it's not already in the
	## :bro:id:`Tunnel::active` table and adds it if not.
	global register: function(ec: EncapsulatingConn);

	## Logs a single tunnel "connection" with action
	## :bro:see:`Tunnel::EXPIRE` and removes it from the
	## :bro:id:`Tunnel::active` table.
	##
	## t: A table of tunnels.
	##
	## idx: The index of the tunnel table corresponding to the tunnel to expire.
	##
	## Returns: 0secs, which when this function is used as an
	##          :bro:attr:`&expire_func`, indicates to remove the element at
	##          *idx* immediately.
	global expire: function(t: table[conn_id] of Info, idx: conn_id): interval;

	## Removes a single tunnel from the :bro:id:`Tunnel::active` table
	## and logs the closing/expiration of the tunnel.
	##
	## tunnel: The tunnel which has closed or expired.
	##
	## action: The specific reason for the tunnel ending.
	global close: function(tunnel: Info, action: Action);

	## The amount of time a tunnel is not used in establishment of new
	## connections before it is considered inactive/expired.
	const expiration_interval = 1hrs &redef;

	## Currently active tunnels. That is, tunnels for which new, encapsulated
	## connections have been seen in the interval indicated by
	## :bro:see:`Tunnel::expiration_interval`.
	global active: table[conn_id] of Info = table() &read_expire=expiration_interval &expire_func=expire;
}

const ayiya_ports = { 5072/udp };
redef dpd_config += { [ANALYZER_AYIYA] = [$ports = ayiya_ports] };

const teredo_ports = { 3544/udp };
redef dpd_config += { [ANALYZER_TEREDO] = [$ports = teredo_ports] };

redef likely_server_ports += { ayiya_ports, teredo_ports };

event bro_init() &priority=5
	{
	Log::create_stream(Tunnel::LOG, [$columns=Info]);
	}

function register_all(ecv: EncapsulatingConnVector)
	{
	for ( i in ecv )
		register(ecv[i]);
	}

function register(ec: EncapsulatingConn)
	{
	if ( ec$cid !in active )
		{
		local tunnel: Info;
		tunnel$ts = network_time();
		if ( ec?$uid )
			tunnel$uid = ec$uid;
		tunnel$id = ec$cid;
		tunnel$action = DISCOVER;
		tunnel$tunnel_type = ec$tunnel_type;
		active[ec$cid] = tunnel;
		Log::write(LOG, tunnel);
		}
	}

function close(tunnel: Info, action: Action)
	{
	tunnel$action = action;
	tunnel$ts = network_time();
	Log::write(LOG, tunnel);
	delete active[tunnel$id];
	}

function expire(t: table[conn_id] of Info, idx: conn_id): interval
	{
	close(t[idx], EXPIRE);
	return 0secs;
	}

event new_connection(c: connection) &priority=5
	{
	if ( c?$tunnel )
		register_all(c$tunnel);
	}

event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5
	{
	register_all(e);
	}

event connection_state_remove(c: connection) &priority=-5
	{
	if ( c$id in active )
		close(active[c$id], CLOSE);
	}
@ -178,6 +178,32 @@ type endpoint_stats: record {
## use ``count``. That should be changed.
type AnalyzerID: count;

module Tunnel;
export {
	## Records the identity of an encapsulating parent of a tunneled connection.
	type EncapsulatingConn: record {
		## The 4-tuple of the encapsulating "connection". In case of an IP-in-IP
		## tunnel the ports will be set to 0. The direction (i.e., orig and
		## resp) are set according to the first tunneled packet seen
		## and not according to the side that established the tunnel.
		cid: conn_id;
		## The type of tunnel.
		tunnel_type: Tunnel::Type;
		## A globally unique identifier that, for non-IP-in-IP tunnels,
		## cross-references the *uid* field of :bro:type:`connection`.
		uid: string &optional;
	} &log;
} # end export
module GLOBAL;

## A type alias for a vector of encapsulating "connections", i.e. for when
## there are tunnels within tunnels.
##
## .. todo:: We need this type definition only for declaring builtin functions
##    via ``bifcl``. We should extend ``bifcl`` to understand composite types
##    directly and then remove this alias.
type EncapsulatingConnVector: vector of Tunnel::EncapsulatingConn;

## Statistics about a :bro:type:`connection` endpoint.
##
## .. bro:see:: connection

@ -199,10 +225,10 @@ type endpoint: record {
	flow_label: count;
};

## A connection. This is Bro's basic connection type describing IP- and
## transport-layer information about the conversation. Note that Bro uses a
## liberal interpretation of "connection" and associates instances of this type
## also with UDP and ICMP flows.
type connection: record {
	id: conn_id;	##< The connection's identifying 4-tuple.
	orig: endpoint;	##< Statistics about originator side.

@ -227,6 +253,12 @@ type connection: record {
	## that is very likely unique across independent Bro runs. These IDs can thus be
	## used to tag and locate information associated with that connection.
	uid: string;
	## If the connection is tunneled, this field contains information about
	## the encapsulating "connection(s)" with the outermost one starting
	## at index zero. It's also always the first such encapsulation seen
	## for the connection unless the :bro:id:`tunnel_changed` event is handled
	## and re-assigns this field to the new encapsulation.
	tunnel: EncapsulatingConnVector &optional;
};

## Fields of a SYN packet.
@ -884,18 +916,9 @@ const frag_timeout = 0.0 sec &redef;
const packet_sort_window = 0 usecs &redef;

## If positive, indicates the encapsulation header size that should
## be skipped. This applies to all packets.
const encap_hdr_size = 0 &redef;

## Whether to use the ``ConnSize`` analyzer to count the number of packets and
## IP-level bytes transferred by each endpoint. If true, these values are returned
## in the connection's :bro:see:`endpoint` record value.

@ -1250,7 +1273,7 @@ type ip6_ext_hdr: record {
	mobility: ip6_mobility_hdr &optional;
};

## A type alias for a vector of IPv6 extension headers.
type ip6_ext_hdr_chain: vector of ip6_ext_hdr;

## Values extracted from an IPv6 header.
@ -1336,6 +1359,42 @@ type pkt_hdr: record {
	icmp: icmp_hdr &optional;	##< The ICMP header if an ICMP packet.
};

## A Teredo authentication header. See :rfc:`4380` for more information
## about the Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
##    teredo_hdr
type teredo_auth: record {
	id:      string;	##< Teredo client identifier.
	value:   string;	##< HMAC-SHA1 over shared secret key between client and
				##< server, nonce, confirmation byte, origin indication
				##< (if present), and the IPv6 packet.
	nonce:   count;		##< Nonce chosen by Teredo client to be repeated by
				##< Teredo server.
	confirm: count;		##< Confirmation byte to be set to 0 by Teredo client
				##< and non-zero by server if client needs new key.
};

## A Teredo origin indication header. See :rfc:`4380` for more information
## about the Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
##    teredo_hdr
type teredo_origin: record {
	p: addr;	##< Unobfuscated UDP port of Teredo client.
	a: addr;	##< Unobfuscated IPv4 address of Teredo client.
};

## A Teredo packet header. See :rfc:`4380` for more information about the
## Teredo protocol.
##
## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication
type teredo_hdr: record {
	auth:   teredo_auth &optional;		##< Teredo authentication header.
	origin: teredo_origin &optional;	##< Teredo origin indication header.
	hdr:    pkt_hdr;			##< IPv6 and transport protocol headers.
};

## Definition of "secondary filters". A secondary filter is a BPF filter given as
## index in this table. For each such filter, the corresponding event is raised for
## all matching packets.
|
||||||
## bt_tracker_response_not_ok
|
## bt_tracker_response_not_ok
|
||||||
type bt_tracker_headers: table[string] of string;
|
type bt_tracker_headers: table[string] of string;
|
||||||
|
|
||||||
|
module SOCKS;
|
||||||
|
export {
|
||||||
|
## This record is for a SOCKS client or server to provide either a
|
||||||
|
## name or an address to represent a desired or established connection.
|
||||||
|
type Address: record {
|
||||||
|
host: addr &optional;
|
||||||
|
name: string &optional;
|
||||||
|
} &log;
|
||||||
|
}
|
||||||
|
module GLOBAL;
|
||||||
|
|
||||||
@load base/event.bif
|
@load base/event.bif
|
||||||
|
|
||||||
## BPF filter the user has set via the -f command line options. Empty if none.
|
## BPF filter the user has set via the -f command line options. Empty if none.
|
||||||
|
@ -2636,11 +2706,33 @@ const record_all_packets = F &redef;
## .. bro:see:: conn_stats
const ignore_keep_alive_rexmit = F &redef;

module Tunnel;
export {
	## The maximum depth of a tunnel to decapsulate until giving up.
	## Setting this to zero will disable all types of tunnel decapsulation.
	const max_depth: count = 2 &redef;

	## Toggle whether to do IPv{4,6}-in-IPv{4,6} decapsulation.
	const enable_ip = T &redef;

	## Toggle whether to do IPv{4,6}-in-AYIYA decapsulation.
	const enable_ayiya = T &redef;

	## Toggle whether to do IPv6-in-Teredo decapsulation.
	const enable_teredo = T &redef;

	## With this option set, the Teredo analysis will first check to see if
	## other protocol analyzers have confirmed that they think they're
	## parsing the right protocol and only continue with Teredo tunnel
	## decapsulation if nothing else has yet confirmed. This can help
	## reduce false positives of UDP traffic (e.g. DNS) that also happens
	## to have a valid Teredo encapsulation.
	const yielding_teredo_decapsulation = T &redef;

	## How often to clean up internal state for inactive IP tunnels.
	const ip_tunnel_timeout = 24hrs &redef;
} # end export
module GLOBAL;

## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef;
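The new ``Tunnel`` options above are all redef-able. As an illustrative sketch (the values are examples, not recommendations), a site that only cares about Teredo decapsulation might tune them like this:

.. code:: bro

    # Illustrative local tuning of the Tunnel decapsulation options.
    redef Tunnel::max_depth = 1;
    redef Tunnel::enable_ip = F;
    redef Tunnel::enable_ayiya = F;
    # Keep Teredo on, but let other confirmed analyzers take precedence.
    redef Tunnel::yielding_teredo_decapsulation = T;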
@ -29,6 +29,7 @@
@load base/frameworks/metrics
@load base/frameworks/intel
@load base/frameworks/reporter
@load base/frameworks/tunnels

@load base/protocols/conn
@load base/protocols/dns
@ -36,6 +37,7 @@
@load base/protocols/http
@load base/protocols/irc
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@load base/protocols/ssl
@load base/protocols/syslog
@@ -101,6 +101,10 @@ export {
 		resp_pkts: count &log &optional;
 		## Number of IP level bytes the responder sent. See ``orig_pkts``.
 		resp_ip_bytes: count &log &optional;
+		## If this connection was over a tunnel, indicate the
+		## *uid* values for any encapsulating parent connections
+		## used over the lifetime of this inner connection.
+		tunnel_parents: set[string] &log;
 	};
 
 	## Event that can be handled to access the :bro:type:`Conn::Info`
@@ -190,6 +194,8 @@ function set_conn(c: connection, eoc: bool)
 	c$conn$ts=c$start_time;
 	c$conn$uid=c$uid;
 	c$conn$id=c$id;
+	if ( c?$tunnel && |c$tunnel| > 0 )
+		add c$conn$tunnel_parents[c$tunnel[|c$tunnel|-1]$uid];
 	c$conn$proto=get_port_transport_proto(c$id$resp_p);
 	if( |Site::local_nets| > 0 )
 		c$conn$local_orig=Site::is_local_addr(c$id$orig_h);
@@ -228,6 +234,14 @@ event content_gap(c: connection, is_orig: bool, seq: count, length: count) &prio
 	c$conn$missed_bytes = c$conn$missed_bytes + length;
 	}
 
+event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5
+	{
+	set_conn(c, F);
+	if ( |e| > 0 )
+		add c$conn$tunnel_parents[e[|e|-1]$uid];
+	c$tunnel = e;
+	}
+
 event connection_state_remove(c: connection) &priority=5
 	{
 	set_conn(c, T);
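The `tunnel_parents` bookkeeping above records only the direct (last) element of the current encapsulation vector each time the tunnel changes, accumulating parent UIDs across the connection's lifetime. A minimal C++ sketch of that logic (names and types are ours, standing in for the Bro script values):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Mirror of the tunnel_changed handler: record the innermost parent's UID
// (e[|e|-1]$uid) into the accumulated set, then replace the stored stack.
struct ConnState {
    std::vector<std::string> tunnel;      // current encapsulation stack, outermost first
    std::set<std::string> tunnel_parents; // every parent UID ever seen
};

void tunnel_changed(ConnState& c, const std::vector<std::string>& e) {
    if (!e.empty())
        c.tunnel_parents.insert(e.back()); // corresponds to e[|e|-1]$uid
    c.tunnel = e;                          // corresponds to c$tunnel = e
}
```

Note that when the encapsulation disappears (empty vector), the set keeps the old parents, which is what lets the conn log list every tunnel used over the inner connection's lifetime.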
scripts/base/protocols/socks/__load__.bro (new file)
@@ -0,0 +1,2 @@
+@load ./consts
+@load ./main
scripts/base/protocols/socks/consts.bro (new file)
@@ -0,0 +1,40 @@
+module SOCKS;
+
+export {
+	type RequestType: enum {
+		CONNECTION = 1,
+		PORT = 2,
+		UDP_ASSOCIATE = 3,
+	};
+
+	const v5_authentication_methods: table[count] of string = {
+		[0] = "No Authentication Required",
+		[1] = "GSSAPI",
+		[2] = "Username/Password",
+		[3] = "Challenge-Handshake Authentication Protocol",
+		[5] = "Challenge-Response Authentication Method",
+		[6] = "Secure Sockets Layer",
+		[7] = "NDS Authentication",
+		[8] = "Multi-Authentication Framework",
+		[255] = "No Acceptable Methods",
+	} &default=function(i: count): string { return fmt("unknown-%d", i); };
+
+	const v4_status: table[count] of string = {
+		[0x5a] = "succeeded",
+		[0x5b] = "general SOCKS server failure",
+		[0x5c] = "request failed because client is not running identd",
+		[0x5d] = "request failed because client's identd could not confirm the user ID string in the request",
+	} &default=function(i: count): string { return fmt("unknown-%d", i); };
+
+	const v5_status: table[count] of string = {
+		[0] = "succeeded",
+		[1] = "general SOCKS server failure",
+		[2] = "connection not allowed by ruleset",
+		[3] = "Network unreachable",
+		[4] = "Host unreachable",
+		[5] = "Connection refused",
+		[6] = "TTL expired",
+		[7] = "Command not supported",
+		[8] = "Address type not supported",
+	} &default=function(i: count): string { return fmt("unknown-%d", i); };
+}
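The `&default=` attribute on these tables makes any lookup with an unmapped reply code yield a synthesized `unknown-<code>` string instead of an error. The same behavior can be sketched in C++ (the helper name is ours; the table contents mirror `v5_status` above):

```cpp
#include <cassert>
#include <cstdio>
#include <map>
#include <string>

// Lookup with a computed fallback, like Bro's `&default=function(...)`:
// unmapped codes render as "unknown-<code>" rather than failing.
std::string v5_status_str(int code) {
    static const std::map<int, std::string> v5_status = {
        {0, "succeeded"},
        {1, "general SOCKS server failure"},
        {2, "connection not allowed by ruleset"},
        {3, "Network unreachable"},
        {4, "Host unreachable"},
        {5, "Connection refused"},
        {6, "TTL expired"},
        {7, "Command not supported"},
        {8, "Address type not supported"},
    };
    auto it = v5_status.find(code);
    if (it != v5_status.end())
        return it->second;
    char buf[32];
    std::snprintf(buf, sizeof(buf), "unknown-%d", code);
    return buf;
}
```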
scripts/base/protocols/socks/main.bro (new file)
@@ -0,0 +1,87 @@
+@load base/frameworks/tunnels
+@load ./consts
+
+module SOCKS;
+
+export {
+	redef enum Log::ID += { LOG };
+
+	type Info: record {
+		## Time when the proxy connection was first detected.
+		ts:        time           &log;
+		uid:       string         &log;
+		id:        conn_id        &log;
+		## Protocol version of SOCKS.
+		version:   count          &log;
+		## Username for the proxy if extracted from the network.
+		user:      string         &log &optional;
+		## Server status for the attempt at using the proxy.
+		status:    string         &log &optional;
+		## Client requested SOCKS address. Could be an address, a name, or both.
+		request:   SOCKS::Address &log &optional;
+		## Client requested port.
+		request_p: port           &log &optional;
+		## Server bound address. Could be an address, a name, or both.
+		bound:     SOCKS::Address &log &optional;
+		## Server bound port.
+		bound_p:   port           &log &optional;
+	};
+
+	## Event that can be handled to access the SOCKS
+	## record as it is sent on to the logging framework.
+	global log_socks: event(rec: Info);
+}
+
+event bro_init() &priority=5
+	{
+	Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks]);
+	}
+
+redef record connection += {
+	socks: SOCKS::Info &optional;
+};
+
+# Configure DPD
+redef capture_filters += { ["socks"] = "tcp port 1080" };
+redef dpd_config += { [ANALYZER_SOCKS] = [$ports = set(1080/tcp)] };
+redef likely_server_ports += { 1080/tcp };
+
+function set_session(c: connection, version: count)
+	{
+	if ( ! c?$socks )
+		c$socks = [$ts=network_time(), $id=c$id, $uid=c$uid, $version=version];
+	}
+
+event socks_request(c: connection, version: count, request_type: count,
+                    sa: SOCKS::Address, p: port, user: string) &priority=5
+	{
+	set_session(c, version);
+
+	c$socks$request = sa;
+	c$socks$request_p = p;
+
+	# Copy this conn_id and set the orig_p to zero because in the case of
+	# SOCKS proxies there will potentially be many source ports, since a new
+	# proxy connection is established for each proxied connection. We treat
+	# this as a singular "tunnel".
+	local cid = copy(c$id);
+	cid$orig_p = 0/tcp;
+	Tunnel::register([$cid=cid, $tunnel_type=Tunnel::SOCKS, $payload_proxy=T]);
+	}
+
+event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=5
+	{
+	set_session(c, version);
+
+	if ( version == 5 )
+		c$socks$status = v5_status[reply];
+	else if ( version == 4 )
+		c$socks$status = v4_status[reply];
+
+	c$socks$bound = sa;
+	c$socks$bound_p = p;
+	}
+
+event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5
+	{
+	Log::write(SOCKS::LOG, c$socks);
+	}
src/AYIYA.cc (new file)
@@ -0,0 +1,24 @@
+#include "AYIYA.h"
+
+AYIYA_Analyzer::AYIYA_Analyzer(Connection* conn)
+: Analyzer(AnalyzerTag::AYIYA, conn)
+	{
+	interp = new binpac::AYIYA::AYIYA_Conn(this);
+	}
+
+AYIYA_Analyzer::~AYIYA_Analyzer()
+	{
+	delete interp;
+	}
+
+void AYIYA_Analyzer::Done()
+	{
+	Analyzer::Done();
+	Event(udp_session_done);
+	}
+
+void AYIYA_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, int seq, const IP_Hdr* ip, int caplen)
+	{
+	Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
+	interp->NewData(orig, data, data + len);
+	}
src/AYIYA.h (new file)
@@ -0,0 +1,29 @@
+#ifndef AYIYA_h
+#define AYIYA_h
+
+#include "ayiya_pac.h"
+
+class AYIYA_Analyzer : public Analyzer {
+public:
+	AYIYA_Analyzer(Connection* conn);
+	virtual ~AYIYA_Analyzer();
+
+	virtual void Done();
+	virtual void DeliverPacket(int len, const u_char* data, bool orig,
+					int seq, const IP_Hdr* ip, int caplen);
+
+	static Analyzer* InstantiateAnalyzer(Connection* conn)
+		{ return new AYIYA_Analyzer(conn); }
+
+	static bool Available()
+		{ return BifConst::Tunnel::enable_ayiya &&
+		         BifConst::Tunnel::max_depth > 0; }
+
+protected:
+	friend class AnalyzerTimer;
+	void ExpireTimer(double t);
+
+	binpac::AYIYA::AYIYA_Conn* interp;
+};
+
+#endif
@@ -4,6 +4,7 @@
 #include "PIA.h"
 #include "Event.h"
+#include "AYIYA.h"
 #include "BackDoor.h"
 #include "BitTorrent.h"
 #include "BitTorrentTracker.h"
@@ -33,9 +34,11 @@
 #include "NFS.h"
 #include "Portmap.h"
 #include "POP3.h"
+#include "SOCKS.h"
 #include "SSH.h"
 #include "SSL.h"
 #include "Syslog-binpac.h"
+#include "Teredo.h"
 #include "ConnSizeAnalyzer.h"
 
 // Keep same order here as in AnalyzerTag definition!
@@ -127,6 +130,16 @@ const Analyzer::Config Analyzer::analyzer_configs[] = {
 	  Syslog_Analyzer_binpac::InstantiateAnalyzer,
 	  Syslog_Analyzer_binpac::Available, 0, false },
+
+	{ AnalyzerTag::AYIYA, "AYIYA",
+	  AYIYA_Analyzer::InstantiateAnalyzer,
+	  AYIYA_Analyzer::Available, 0, false },
+	{ AnalyzerTag::SOCKS, "SOCKS",
+	  SOCKS_Analyzer::InstantiateAnalyzer,
+	  SOCKS_Analyzer::Available, 0, false },
+	{ AnalyzerTag::Teredo, "TEREDO",
+	  Teredo_Analyzer::InstantiateAnalyzer,
+	  Teredo_Analyzer::Available, 0, false },
 	{ AnalyzerTag::File, "FILE", File_Analyzer::InstantiateAnalyzer,
 	  File_Analyzer::Available, 0, false },
 	{ AnalyzerTag::Backdoor, "BACKDOOR",
@@ -215,6 +215,11 @@ public:
 	// analyzer, even if the method is called multiple times.
 	virtual void ProtocolConfirmation();
 
+	// Return whether the analyzer previously called ProtocolConfirmation()
+	// at least once before.
+	bool ProtocolConfirmed() const
+		{ return protocol_confirmed; }
+
 	// Report that we found a significant protocol violation which might
 	// indicate that the analyzed data is in fact not the expected
 	// protocol. The protocol_violation event is raised once per call to
@@ -338,6 +343,10 @@ private:
 	for ( analyzer_list::iterator var = the_kids.begin(); \
 	      var != the_kids.end(); var++ )
 
+#define LOOP_OVER_GIVEN_CONST_CHILDREN(var, the_kids) \
+	for ( analyzer_list::const_iterator var = the_kids.begin(); \
+	      var != the_kids.end(); var++ )
+
 class SupportAnalyzer : public Analyzer {
 public:
 	SupportAnalyzer(AnalyzerTag::Tag tag, Connection* conn, bool arg_orig)
@@ -33,11 +33,15 @@ namespace AnalyzerTag {
 		DHCP_BINPAC, DNS_TCP_BINPAC, DNS_UDP_BINPAC,
 		HTTP_BINPAC, SSL, SYSLOG_BINPAC,
+
+		// Decapsulation analyzers.
+		AYIYA,
+		SOCKS,
+		Teredo,
 
 		// Other
 		File, Backdoor, InterConn, SteppingStone, TCPStats,
 		ConnSize,
 
 		// Support-analyzers
 		Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP,
 		Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh,
@@ -187,6 +187,9 @@ endmacro(BINPAC_TARGET)
 
 binpac_target(binpac-lib.pac)
 binpac_target(binpac_bro-lib.pac)
+binpac_target(ayiya.pac
+    ayiya-protocol.pac ayiya-analyzer.pac)
 binpac_target(bittorrent.pac
     bittorrent-protocol.pac bittorrent-analyzer.pac)
 binpac_target(dce_rpc.pac
@@ -206,6 +209,8 @@ binpac_target(netflow.pac
     netflow-protocol.pac netflow-analyzer.pac)
 binpac_target(smb.pac
     smb-protocol.pac smb-pipe.pac smb-mailslot.pac)
+binpac_target(socks.pac
+    socks-protocol.pac socks-analyzer.pac)
 binpac_target(ssl.pac
     ssl-defs.pac ssl-protocol.pac ssl-analyzer.pac)
 binpac_target(syslog.pac
@@ -277,6 +282,7 @@ set(bro_SRCS
     Anon.cc
     ARP.cc
     Attr.cc
+    AYIYA.cc
     BackDoor.cc
     Base64.cc
     BitTorrent.cc
@@ -375,6 +381,7 @@ set(bro_SRCS
     SmithWaterman.cc
     SMB.cc
     SMTP.cc
+    SOCKS.cc
     SSH.cc
     SSL.cc
     Scope.cc
@@ -391,9 +398,11 @@ set(bro_SRCS
     TCP_Endpoint.cc
     TCP_Reassembler.cc
     Telnet.cc
+    Teredo.cc
     Timer.cc
     Traverse.cc
     Trigger.cc
+    TunnelEncapsulation.cc
     Type.cc
     UDP.cc
     Val.cc
src/Conn.cc
@@ -13,6 +13,7 @@
 #include "Timer.h"
 #include "PIA.h"
 #include "binpac.h"
+#include "TunnelEncapsulation.h"
 
 void ConnectionTimer::Init(Connection* arg_conn, timer_func arg_timer,
 			int arg_do_expire)
@@ -112,7 +113,7 @@ unsigned int Connection::external_connections = 0;
 IMPLEMENT_SERIAL(Connection, SER_CONNECTION);
 
 Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
-			uint32 flow)
+			uint32 flow, const EncapsulationStack* arg_encap)
 	{
 	sessions = s;
 	key = k;
@@ -160,6 +161,11 @@ Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
 
 	uid = 0; // Will set later.
 
+	if ( arg_encap )
+		encapsulation = new EncapsulationStack(*arg_encap);
+	else
+		encapsulation = 0;
+
 	if ( conn_timer_mgr )
 		{
 		++external_connections;
@@ -187,12 +193,40 @@ Connection::~Connection()
 	delete key;
 	delete root_analyzer;
 	delete conn_timer_mgr;
+	delete encapsulation;
 
 	--current_connections;
 	if ( conn_timer_mgr )
 		--external_connections;
 	}
 
+void Connection::CheckEncapsulation(const EncapsulationStack* arg_encap)
+	{
+	if ( encapsulation && arg_encap )
+		{
+		if ( *encapsulation != *arg_encap )
+			{
+			Event(tunnel_changed, 0, arg_encap->GetVectorVal());
+			delete encapsulation;
+			encapsulation = new EncapsulationStack(*arg_encap);
+			}
+		}
+
+	else if ( encapsulation )
+		{
+		EncapsulationStack empty;
+		Event(tunnel_changed, 0, empty.GetVectorVal());
+		delete encapsulation;
+		encapsulation = 0;
+		}
+
+	else if ( arg_encap )
+		{
+		Event(tunnel_changed, 0, arg_encap->GetVectorVal());
+		encapsulation = new EncapsulationStack(*arg_encap);
+		}
+	}
+
 void Connection::Done()
 	{
 	finished = 1;
@@ -349,6 +383,9 @@ RecordVal* Connection::BuildConnVal()
 
 		char tmp[20];
 		conn_val->Assign(9, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62)));
+
+		if ( encapsulation && encapsulation->Depth() > 0 )
+			conn_val->Assign(10, encapsulation->GetVectorVal());
 		}
 
 	if ( root_analyzer )
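`Connection::CheckEncapsulation()` above distinguishes three cases: the stored encapsulation changed, it disappeared, or one appeared for the first time; in each case a `tunnel_changed` event fires once. A toy C++ model of that state machine (an empty vector stands in for a null `EncapsulationStack` pointer, and the names are ours):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal model of CheckEncapsulation(): compare the incoming encapsulation
// against the stored one and count how often a "tunnel_changed" event would
// be raised.
struct Conn {
    std::vector<std::string> encap; // stored encapsulation (empty = none)
    int tunnel_changed_events = 0;

    void CheckEncapsulation(const std::vector<std::string>& arg) {
        if (!encap.empty() && !arg.empty()) {
            if (encap != arg) {       // tunnel changed
                ++tunnel_changed_events;
                encap = arg;
            }
        } else if (!encap.empty()) {  // tunnel went away
            ++tunnel_changed_events;
            encap.clear();
        } else if (!arg.empty()) {    // first encapsulation seen
            ++tunnel_changed_events;
            encap = arg;
        }
    }
};
```

Note the symmetry with the real code: an unchanged encapsulation (including the "still no tunnel" case) raises no event at all.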
src/Conn.h
@@ -13,6 +13,7 @@
 #include "RuleMatcher.h"
 #include "AnalyzerTags.h"
 #include "IPAddr.h"
+#include "TunnelEncapsulation.h"
 
 class Connection;
 class ConnectionTimer;
@@ -51,9 +52,16 @@ class Analyzer;
 class Connection : public BroObj {
 public:
 	Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
-			uint32 flow);
+			uint32 flow, const EncapsulationStack* arg_encap);
 	virtual ~Connection();
 
+	// Invoked when an encapsulation is discovered. It records the
+	// encapsulation with the connection and raises a "tunnel_changed"
+	// event if it's different from the previous encapsulation (or the
+	// first encountered). encap can be null to indicate no
+	// encapsulation.
+	void CheckEncapsulation(const EncapsulationStack* encap);
+
 	// Invoked when connection is about to be removed. Use Ref(this)
 	// inside Done to keep the connection object around (though it'll
 	// no longer be accessible from the dictionary of active
@@ -242,6 +250,11 @@ public:
 
 	void SetUID(uint64 arg_uid)	 { uid = arg_uid; }
 
+	uint64 GetUID() const { return uid; }
+
+	const EncapsulationStack* GetEncapsulation() const
+		{ return encapsulation; }
+
 	void CheckFlowLabel(bool is_orig, uint32 flow_label);
 
 protected:
@@ -279,6 +292,7 @@ protected:
 	double inactivity_timeout;
 	RecordVal* conn_val;
 	LoginConn* login_conn;	// either nil, or this
+	const EncapsulationStack* encapsulation; // tunnels
 	int suppress_event;	// suppress certain events to once per conn.
 
 	unsigned int installed_status_timer:1;
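The 64-bit `uid` exposed here is what `BuildConnVal()` (in Conn.cc above) renders as a string with `uitoa_n(uid, tmp, sizeof(tmp), 62)`, i.e. a base-62 encoding. A sketch of such an encoding, assuming a digit alphabet of `0-9`, `a-z`, `A-Z` in that order (the exact alphabet is an assumption, not taken from `uitoa_n`):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Base-62 rendering of an unsigned 64-bit value, most significant digit
// first. The alphabet ordering here is illustrative.
std::string to_base62(uint64_t v) {
    static const char digits[] =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    if (v == 0)
        return "0";
    std::string out;
    while (v > 0) {
        out.insert(out.begin(), digits[v % 62]); // prepend next digit
        v /= 62;
    }
    return out;
}
```

Base 62 keeps UIDs short and purely alphanumeric, which is convenient for grepping logs.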
@@ -572,8 +572,9 @@ void BroFile::InstallRotateTimer()
 		const char* base_time = log_rotate_base_time ?
 			log_rotate_base_time->AsString()->CheckString() : 0;
 
+		double base = parse_rotate_base_time(base_time);
 		double delta_t =
-			calc_next_rotate(rotate_interval, base_time);
+			calc_next_rotate(network_time, rotate_interval, base);
 		rotate_timer = new RotateTimer(network_time + delta_t,
 						this, true);
 		}
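The change above separates parsing the rotation base time from computing the delay until the next rotation boundary. The arithmetic behind such a `calc_next_rotate(network_time, rotate_interval, base)` call is the distance from "now" to the next interval boundary aligned to the base offset; the formula below is our plausible reconstruction, not the library's exact code:

```cpp
#include <cassert>
#include <cmath>

// Seconds until the next rotation boundary: boundaries sit at
// base + k * interval, so measure how far we are into the current
// interval and return the remainder.
double next_rotate_delta(double now, double interval, double base) {
    double into = std::fmod(now - base, interval);
    if (into < 0)
        into += interval; // handle now < base
    return interval - into;
}
```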
src/Func.cc
@@ -329,7 +329,17 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
 			bodies[i].stmts->GetLocationInfo());
 
 		Unref(result);
-		result = bodies[i].stmts->Exec(f, flow);
+
+		try
+			{
+			result = bodies[i].stmts->Exec(f, flow);
+			}
+
+		catch ( InterpreterException& e )
+			{
+			// Already reported, but we continue exec'ing remaining bodies.
+			continue;
+			}
 
 		if ( f->HasDelayed() )
 			{
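This is the "exceptions thrown in event handlers preventing others from running" fix from the CHANGES entry: each body now executes inside its own try, so one failing handler skips only itself. The isolation pattern in self-contained form (the names are illustrative, and `std::runtime_error` stands in for `InterpreterException`):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <vector>

// Run every registered handler body; a body that throws is skipped and the
// remaining bodies still execute. Returns how many bodies completed.
int run_all_bodies(const std::vector<std::function<void()>>& bodies) {
    int executed = 0;
    for (const auto& body : bodies) {
        try {
            body();
            ++executed;
        } catch (const std::runtime_error&) {
            // Already reported elsewhere; continue with remaining bodies.
            continue;
        }
    }
    return executed;
}
```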
@@ -64,7 +64,8 @@ void ICMP_Analyzer::DeliverPacket(int len, const u_char* data,
 		break;
 
 	default:
-		reporter->InternalError("unexpected IP proto in ICMP analyzer");
+		reporter->InternalError("unexpected IP proto in ICMP analyzer: %d",
+					ip->NextProto());
 		break;
 	}
@@ -31,7 +31,6 @@ int tcp_SYN_ack_ok;
 int tcp_match_undelivered;
 
 int encap_hdr_size;
-int udp_tunnel_port;
 
 double frag_timeout;
@@ -49,6 +48,8 @@ int tcp_excessive_data_without_further_acks;
 
 RecordType* x509_type;
 
+RecordType* socks_address;
+
 double non_analyzed_lifetime;
 double tcp_inactivity_timeout;
 double udp_inactivity_timeout;
@@ -328,8 +329,6 @@ void init_net_var()
 
 	encap_hdr_size = opt_internal_int("encap_hdr_size");
 
-	udp_tunnel_port = opt_internal_int("udp_tunnel_port") & ~UDP_PORT_MASK;
-
 	frag_timeout = opt_internal_double("frag_timeout");
 
 	tcp_SYN_timeout = opt_internal_double("tcp_SYN_timeout");
@@ -348,6 +347,8 @@ void init_net_var()
 
 	x509_type = internal_type("X509")->AsRecordType();
 
+	socks_address = internal_type("SOCKS::Address")->AsRecordType();
+
 	non_analyzed_lifetime = opt_internal_double("non_analyzed_lifetime");
 	tcp_inactivity_timeout = opt_internal_double("tcp_inactivity_timeout");
 	udp_inactivity_timeout = opt_internal_double("udp_inactivity_timeout");

@@ -34,7 +34,6 @@ extern int tcp_SYN_ack_ok;
 extern int tcp_match_undelivered;
 
 extern int encap_hdr_size;
-extern int udp_tunnel_port;
 
 extern double frag_timeout;
@@ -52,6 +51,8 @@ extern int tcp_excessive_data_without_further_acks;
 
 extern RecordType* x509_type;
 
+extern RecordType* socks_address;
+
 extern double non_analyzed_lifetime;
 extern double tcp_inactivity_timeout;
 extern double udp_inactivity_timeout;
@@ -193,7 +193,18 @@ void PktSrc::Process()
 		{
 		protocol = (data[3] << 24) + (data[2] << 16) + (data[1] << 8) + data[0];
 
-		if ( protocol != AF_INET && protocol != AF_INET6 )
+		// From the Wireshark Wiki: "AF_INET6, unfortunately, has
+		// different values in {NetBSD,OpenBSD,BSD/OS},
+		// {FreeBSD,DragonFlyBSD}, and {Darwin/Mac OS X}, so an IPv6
+		// packet might have a link-layer header with 24, 28, or 30
+		// as the AF_ value." As we may be reading traces captured on
+		// platforms other than what we're running on, we accept them
+		// all here.
+		if ( protocol != AF_INET
+		     && protocol != AF_INET6
+		     && protocol != 24
+		     && protocol != 28
+		     && protocol != 30 )
 			{
 			sessions->Weird("non_ip_packet_in_null_transport", &hdr, data);
 			data = 0;
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields)
|
bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields)
|
||||||
{
|
{
|
||||||
loop_over_list(peers, i)
|
loop_over_list(peers, i)
|
||||||
{
|
{
|
||||||
SendLogCreateWriter(peers[i]->id, id, writer, path, num_fields, fields);
|
SendLogCreateWriter(peers[i]->id, id, writer, info, num_fields, fields);
|
||||||
}
|
}
|
||||||
|
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields)
|
bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields)
|
||||||
{
|
{
|
||||||
SetErrorDescr("logging");
|
SetErrorDescr("logging");
|
||||||
|
|
||||||
|
@ -2535,8 +2535,8 @@ bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal*
|
||||||
|
|
||||||
bool success = fmt.Write(id->AsEnum(), "id") &&
|
bool success = fmt.Write(id->AsEnum(), "id") &&
|
||||||
fmt.Write(writer->AsEnum(), "writer") &&
|
fmt.Write(writer->AsEnum(), "writer") &&
|
||||||
fmt.Write(path, "path") &&
|
fmt.Write(num_fields, "num_fields") &&
|
||||||
fmt.Write(num_fields, "num_fields");
|
info.Write(&fmt);
|
||||||
|
|
||||||
if ( ! success )
|
if ( ! success )
|
||||||
goto error;
|
goto error;
|
||||||
|
@ -2691,13 +2691,13 @@ bool RemoteSerializer::ProcessLogCreateWriter()
|
||||||
fmt.StartRead(current_args->data, current_args->len);
|
fmt.StartRead(current_args->data, current_args->len);
|
||||||
|
|
||||||
int id, writer;
|
int id, writer;
|
||||||
string path;
|
|
||||||
int num_fields;
|
int num_fields;
|
||||||
|
logging::WriterBackend::WriterInfo info;
|
||||||
|
|
||||||
bool success = fmt.Read(&id, "id") &&
|
bool success = fmt.Read(&id, "id") &&
|
||||||
fmt.Read(&writer, "writer") &&
|
fmt.Read(&writer, "writer") &&
|
||||||
fmt.Read(&path, "path") &&
|
fmt.Read(&num_fields, "num_fields") &&
|
||||||
fmt.Read(&num_fields, "num_fields");
|
info.Read(&fmt);
|
||||||
|
|
||||||
if ( ! success )
|
if ( ! success )
|
||||||
goto error;
|
goto error;
|
||||||
|
@ -2716,7 +2716,7 @@ bool RemoteSerializer::ProcessLogCreateWriter()
|
||||||
id_val = new EnumVal(id, BifType::Enum::Log::ID);
|
id_val = new EnumVal(id, BifType::Enum::Log::ID);
|
||||||
writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
|
writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
|
||||||
|
|
||||||
if ( ! log_mgr->CreateWriter(id_val, writer_val, path, num_fields, fields, true, false) )
|
if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields, true, false) )
|
||||||
goto error;
|
goto error;
|
||||||
|
|
||||||
Unref(id_val);
|
Unref(id_val);
|
||||||
|
|
|
@ -9,6 +9,7 @@
|
||||||
#include "IOSource.h"
|
#include "IOSource.h"
|
||||||
#include "Stats.h"
|
#include "Stats.h"
|
||||||
#include "File.h"
|
#include "File.h"
|
||||||
|
#include "logging/WriterBackend.h"
|
||||||
|
|
||||||
#include <vector>
|
#include <vector>
|
||||||
#include <string>
|
#include <string>
|
||||||
|
@ -104,10 +105,10 @@ public:
|
||||||
bool SendPrintHookEvent(BroFile* f, const char* txt, size_t len);
|
bool SendPrintHookEvent(BroFile* f, const char* txt, size_t len);
|
||||||
|
|
||||||
// Send a request to create a writer on a remote side.
|
// Send a request to create a writer on a remote side.
|
||||||
bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields);
|
bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields);
|
||||||
|
|
||||||
// Broadcasts a request to create a writer.
|
// Broadcasts a request to create a writer.
|
||||||
bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Field* const * fields);
|
bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields);
|
||||||
|
|
||||||
// Broadcast a log entry to everybody interested.
|
// Broadcast a log entry to everybody interested.
|
||||||
bool SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals);
|
bool SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals);
|
||||||
|
|
79
src/SOCKS.cc
Normal file
@@ -0,0 +1,79 @@
+
+#include "SOCKS.h"
+#include "socks_pac.h"
+#include "TCP_Reassembler.h"
+
+SOCKS_Analyzer::SOCKS_Analyzer(Connection* conn)
+	: TCP_ApplicationAnalyzer(AnalyzerTag::SOCKS, conn)
+	{
+	interp = new binpac::SOCKS::SOCKS_Conn(this);
+	orig_done = resp_done = false;
+	pia = 0;
+	}
+
+SOCKS_Analyzer::~SOCKS_Analyzer()
+	{
+	delete interp;
+	}
+
+void SOCKS_Analyzer::EndpointDone(bool orig)
+	{
+	if ( orig )
+		orig_done = true;
+	else
+		resp_done = true;
+	}
+
+void SOCKS_Analyzer::Done()
+	{
+	TCP_ApplicationAnalyzer::Done();
+
+	interp->FlowEOF(true);
+	interp->FlowEOF(false);
+	}
+
+void SOCKS_Analyzer::EndpointEOF(TCP_Reassembler* endp)
+	{
+	TCP_ApplicationAnalyzer::EndpointEOF(endp);
+	interp->FlowEOF(endp->IsOrig());
+	}
+
+void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
+	{
+	TCP_ApplicationAnalyzer::DeliverStream(len, data, orig);
+
+	assert(TCP());
+
+	if ( TCP()->IsPartial() )
+		// Punt on partial connections.
+		return;
+
+	if ( orig_done && resp_done )
+		{
+		// Finished decapsulating the tunnel layer. Now do standard
+		// processing with the rest of the connection.
+		//
+		// Note that we assume that no payload data arrives before both
+		// endpoints are done with their part of the SOCKS protocol.
+
+		if ( ! pia )
+			{
+			pia = new PIA_TCP(Conn());
+			AddChildAnalyzer(pia);
+			pia->FirstPacket(true, 0);
+			pia->FirstPacket(false, 0);
+			}
+
+		ForwardStream(len, data, orig);
+		}
+	else
+		{
+		interp->NewData(orig, data, data + len);
+		}
+	}
+
+void SOCKS_Analyzer::Undelivered(int seq, int len, bool orig)
+	{
+	TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
+	interp->NewGap(orig, len);
+	}
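The SOCKS analyzer above switches from handshake parsing to payload forwarding once both endpoints have finished their half of the protocol, creating a PIA child analyzer lazily at the transition. A minimal sketch of that state machine, with a hypothetical `SocksState` standing in for the analyzer and plain strings standing in for the byte stream (not part of the diff):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for SOCKS_Analyzer's state: it tracks
// orig_done/resp_done and only creates the PIA child once both sides
// have completed their part of the SOCKS handshake.
struct SocksState {
    bool orig_done = false;
    bool resp_done = false;
    bool pia_created = false;
    std::vector<std::string> forwarded;  // payload handed downstream

    // Mirrors DeliverStream's branching: handshake bytes go to the
    // protocol parser; post-handshake bytes are forwarded to the PIA.
    void Deliver(bool orig, const std::string& data) {
        if (orig_done && resp_done) {
            if (!pia_created)
                pia_created = true;  // real code: new PIA_TCP + FirstPacket
            forwarded.push_back(data);
        }
        // else: real code calls interp->NewData(orig, ...) to parse
        (void)orig;
    }
};
```

This matches the diff's assumption that no tunneled payload arrives before both endpoints report done.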
45
src/SOCKS.h
Normal file
45
src/SOCKS.h
Normal file
|
@ -0,0 +1,45 @@
|
||||||
|
#ifndef socks_h
|
||||||
|
#define socks_h
|
||||||
|
|
||||||
|
// SOCKS v4 analyzer.
|
||||||
|
|
||||||
|
#include "TCP.h"
|
||||||
|
#include "PIA.h"
|
||||||
|
|
||||||
|
namespace binpac {
|
||||||
|
namespace SOCKS {
|
||||||
|
class SOCKS_Conn;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class SOCKS_Analyzer : public TCP_ApplicationAnalyzer {
|
||||||
|
public:
|
||||||
|
SOCKS_Analyzer(Connection* conn);
|
||||||
|
~SOCKS_Analyzer();
|
||||||
|
|
||||||
|
void EndpointDone(bool orig);
|
||||||
|
|
||||||
|
virtual void Done();
|
||||||
|
virtual void DeliverStream(int len, const u_char* data, bool orig);
|
||||||
|
virtual void Undelivered(int seq, int len, bool orig);
|
||||||
|
virtual void EndpointEOF(TCP_Reassembler* endp);
|
||||||
|
|
||||||
|
static Analyzer* InstantiateAnalyzer(Connection* conn)
|
||||||
|
{ return new SOCKS_Analyzer(conn); }
|
||||||
|
|
||||||
|
static bool Available()
|
||||||
|
{
|
||||||
|
return socks_request || socks_reply;
|
||||||
|
}
|
||||||
|
|
||||||
|
protected:
|
||||||
|
|
||||||
|
bool orig_done;
|
||||||
|
bool resp_done;
|
||||||
|
|
||||||
|
PIA_TCP *pia;
|
||||||
|
binpac::SOCKS::SOCKS_Conn* interp;
|
||||||
|
};
|
||||||
|
|
||||||
|
#endif
|
284
src/Sessions.cc
@@ -30,6 +30,7 @@
 #include "DPM.h"

 #include "PacketSort.h"
+#include "TunnelEncapsulation.h"

 // These represent NetBIOS services on ephemeral ports. They're numbered
 // so that we can use a single int to hold either an actual TCP/UDP server
@@ -67,6 +68,26 @@ void TimerMgrExpireTimer::Dispatch(double t, int is_expire)
 	}
 	}

+void IPTunnelTimer::Dispatch(double t, int is_expire)
+	{
+	NetSessions::IPTunnelMap::const_iterator it =
+			sessions->ip_tunnels.find(tunnel_idx);
+
+	if ( it == sessions->ip_tunnels.end() )
+		return;
+
+	double last_active = it->second.second;
+	double inactive_time = t > last_active ? t - last_active : 0;
+
+	if ( inactive_time >= BifConst::Tunnel::ip_tunnel_timeout )
+		// Tunnel activity timed out, delete it from the map.
+		sessions->ip_tunnels.erase(tunnel_idx);
+
+	else if ( ! is_expire )
+		// Tunnel activity didn't time out, schedule another timer.
+		timer_mgr->Add(new IPTunnelTimer(t, tunnel_idx));
+	}
+
 NetSessions::NetSessions()
 	{
 	TypeList* t = new TypeList();
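The `IPTunnelTimer::Dispatch` hunk above expires a tunnel map entry when its inactivity exceeds `Tunnel::ip_tunnel_timeout`, otherwise reschedules itself. The core decision can be sketched as a pure function, with the timeout passed in as a plain double (a hypothetical helper, not in the commit):

```cpp
#include <cassert>

// Sketch of the timer's expiry decision: returns true if the tunnel entry
// should be erased, false if a fresh timer should be scheduled. The real
// code also skips rescheduling when is_expire is set at shutdown. Note the
// clamp to 0 when the clock appears to run backwards.
bool TunnelTimedOut(double now, double last_active, double timeout) {
    double inactive = now > last_active ? now - last_active : 0;
    return inactive >= timeout;
}
```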
@@ -142,16 +163,6 @@ void NetSessions::Done()
 	{
 	}

-namespace // private namespace
-{
-bool looks_like_IPv4_packet(int len, const struct ip* ip_hdr)
-	{
-	if ( len < int(sizeof(struct ip)) )
-		return false;
-	return ip_hdr->ip_v == 4 && ntohs(ip_hdr->ip_len) == len;
-	}
-}

 void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
 			const u_char* pkt, int hdr_size,
 			PktSrc* src_ps, PacketSortElement* pkt_elem)
@@ -168,60 +179,8 @@ void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
 		}

 	if ( encap_hdr_size > 0 && ip_data )
-		{
-		// We're doing tunnel encapsulation. Check whether there's
-		// a particular associated port.
-		//
-		// Should we discourage the use of encap_hdr_size for UDP
-		// tunneling? It is probably better handled by enabling
-		// BifConst::parse_udp_tunnels instead of specifying a fixed
-		// encap_hdr_size.
-		if ( udp_tunnel_port > 0 )
-			{
-			ASSERT(ip_hdr);
-			if ( ip_hdr->ip_p == IPPROTO_UDP )
-				{
-				const struct udphdr* udp_hdr =
-					reinterpret_cast<const struct udphdr*>
-						(ip_data);
-
-				if ( ntohs(udp_hdr->uh_dport) == udp_tunnel_port )
-					{
-					// A match.
-					hdr_size += encap_hdr_size;
-					}
-				}
-			}
-
-		else
 		// Blanket encapsulation
 		hdr_size += encap_hdr_size;
-		}
-
-	// Check IP packets encapsulated through UDP tunnels.
-	// Specifying a udp_tunnel_port is optional but recommended (to avoid
-	// the cost of checking every UDP packet).
-	else if ( BifConst::parse_udp_tunnels && ip_data && ip_hdr->ip_p == IPPROTO_UDP )
-		{
-		const struct udphdr* udp_hdr =
-			reinterpret_cast<const struct udphdr*>(ip_data);
-
-		if ( udp_tunnel_port == 0 || // 0 matches any port
-		     udp_tunnel_port == ntohs(udp_hdr->uh_dport) )
-			{
-			const u_char* udp_data =
-				ip_data + sizeof(struct udphdr);
-			const struct ip* ip_encap =
-				reinterpret_cast<const struct ip*>(udp_data);
-			const int ip_encap_len =
-				ntohs(udp_hdr->uh_ulen) - sizeof(struct udphdr);
-			const int ip_encap_caplen =
-				hdr->caplen - (udp_data - pkt);
-
-			if ( looks_like_IPv4_packet(ip_encap_len, ip_encap) )
-				hdr_size = udp_data - pkt;
-			}
-		}

 	if ( src_ps->FilterType() == TYPE_FILTER_NORMAL )
 		NextPacket(t, hdr, pkt, hdr_size, pkt_elem);
@@ -251,7 +210,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
 		// difference here is that header extraction in
 		// PacketSort does not generate Weird events.

-		DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size);
+		DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size, 0);

 	else
 		{
@@ -276,7 +235,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
 	if ( ip->ip_v == 4 )
 		{
 		IP_Hdr ip_hdr(ip, false);
-		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size);
+		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0);
 		}

 	else if ( ip->ip_v == 6 )
@@ -288,7 +247,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
 		}

 		IP_Hdr ip_hdr((const struct ip6_hdr*) (pkt + hdr_size), false, caplen);
-		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size);
+		DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0);
 		}

 	else if ( ARP_Analyzer::IsARP(pkt, hdr_size) )
@@ -410,7 +369,7 @@ int NetSessions::CheckConnectionTag(Connection* conn)

 void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 			const IP_Hdr* ip_hdr, const u_char* const pkt,
-			int hdr_size)
+			int hdr_size, const EncapsulationStack* encapsulation)
 	{
 	uint32 caplen = hdr->caplen - hdr_size;
 	const struct ip* ip4 = ip_hdr->IP4_Hdr();
@@ -418,7 +377,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 	uint32 len = ip_hdr->TotalLen();
 	if ( hdr->len < len + hdr_size )
 		{
-		Weird("truncated_IP", hdr, pkt);
+		Weird("truncated_IP", hdr, pkt, encapsulation);
 		return;
 		}

@@ -430,7 +389,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 	if ( ! ignore_checksums && ip4 &&
 	     ones_complement_checksum((void*) ip4, ip_hdr_len, 0) != 0xffff )
 		{
-		Weird("bad_IP_checksum", hdr, pkt);
+		Weird("bad_IP_checksum", hdr, pkt, encapsulation);
 		return;
 		}

@@ -445,7 +404,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,

 	if ( caplen < len )
 		{
-		Weird("incompletely_captured_fragment", ip_hdr);
+		Weird("incompletely_captured_fragment", ip_hdr, encapsulation);

 		// Don't try to reassemble, that's doomed.
 		// Discard all except the first fragment (which
@@ -497,7 +456,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,

 	if ( ! ignore_checksums && mobility_header_checksum(ip_hdr) != 0xffff )
 		{
-		Weird("bad_MH_checksum", hdr, pkt);
+		Weird("bad_MH_checksum", hdr, pkt, encapsulation);
 		Remove(f);
 		return;
 		}
@@ -510,7 +469,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 		}

 	if ( ip_hdr->NextProto() != IPPROTO_NONE )
-		Weird("mobility_piggyback", hdr, pkt);
+		Weird("mobility_piggyback", hdr, pkt, encapsulation);

 	Remove(f);
 	return;
@@ -519,7 +478,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,

 	int proto = ip_hdr->NextProto();

-	if ( CheckHeaderTrunc(proto, len, caplen, hdr, pkt) )
+	if ( CheckHeaderTrunc(proto, len, caplen, hdr, pkt, encapsulation) )
 		{
 		Remove(f);
 		return;
@@ -585,8 +544,83 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 		break;
 	}

+	case IPPROTO_IPV4:
+	case IPPROTO_IPV6:
+		{
+		if ( ! BifConst::Tunnel::enable_ip )
+			{
+			Weird("IP_tunnel", ip_hdr, encapsulation);
+			Remove(f);
+			return;
+			}
+
+		if ( encapsulation &&
+		     encapsulation->Depth() >= BifConst::Tunnel::max_depth )
+			{
+			Weird("exceeded_tunnel_max_depth", ip_hdr, encapsulation);
+			Remove(f);
+			return;
+			}
+
+		// Check for a valid inner packet first.
+		IP_Hdr* inner = 0;
+		int result = ParseIPPacket(caplen, data, proto, inner);
+
+		if ( result < 0 )
+			Weird("truncated_inner_IP", ip_hdr, encapsulation);
+
+		else if ( result > 0 )
+			Weird("inner_IP_payload_length_mismatch", ip_hdr, encapsulation);
+
+		if ( result != 0 )
+			{
+			delete inner;
+			Remove(f);
+			return;
+			}
+
+		// Look up to see if we've already seen this IP tunnel, identified
+		// by the pair of IP addresses, so that we can always associate the
+		// same UID with it.
+		IPPair tunnel_idx;
+		if ( ip_hdr->SrcAddr() < ip_hdr->DstAddr() )
+			tunnel_idx = IPPair(ip_hdr->SrcAddr(), ip_hdr->DstAddr());
+		else
+			tunnel_idx = IPPair(ip_hdr->DstAddr(), ip_hdr->SrcAddr());
+
+		IPTunnelMap::iterator it = ip_tunnels.find(tunnel_idx);
+
+		if ( it == ip_tunnels.end() )
+			{
+			EncapsulatingConn ec(ip_hdr->SrcAddr(), ip_hdr->DstAddr());
+			ip_tunnels[tunnel_idx] = TunnelActivity(ec, network_time);
+			timer_mgr->Add(new IPTunnelTimer(network_time, tunnel_idx));
+			}
+		else
+			it->second.second = network_time;
+
+		DoNextInnerPacket(t, hdr, inner, encapsulation,
+		                  ip_tunnels[tunnel_idx].first);
+
+		Remove(f);
+		return;
+		}
+
+	case IPPROTO_NONE:
+		{
+		// If the packet is encapsulated in Teredo, then it was a bubble and
+		// the Teredo analyzer may have raised an event for that; otherwise
+		// we're not sure of the reason for the No Next Header in the packet.
+		if ( ! ( encapsulation &&
+		     encapsulation->LastType() == BifEnum::Tunnel::TEREDO ) )
+			Weird("ipv6_no_next", hdr, pkt);
+
+		Remove(f);
+		return;
+		}
+
 	default:
-		Weird(fmt("unknown_protocol_%d", proto), hdr, pkt);
+		Weird(fmt("unknown_protocol_%d", proto), hdr, pkt, encapsulation);
 		Remove(f);
 		return;
 	}
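The tunnel lookup above orders the address pair before using it as the map key, so both directions of the same IP-in-IP tunnel land on one entry and thus keep one UID. The key construction can be sketched with `std::string` standing in for `IPAddr` (which compares the same way via `operator<`):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Canonical tunnel index: the lexicographically smaller address always
// comes first, making the key direction-independent.
using IPPair = std::pair<std::string, std::string>;

IPPair TunnelIndex(const std::string& src, const std::string& dst) {
    return src < dst ? IPPair(src, dst) : IPPair(dst, src);
}
```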
@@ -602,7 +636,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 	conn = (Connection*) d->Lookup(h);
 	if ( ! conn )
 		{
-		conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel());
+		conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
 		if ( conn )
 			d->Insert(h, conn);
 		}
@@ -623,12 +657,15 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 			conn->Event(connection_reused, 0);

 			Remove(conn);
-			conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel());
+			conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
 			if ( conn )
 				d->Insert(h, conn);
 			}
 		else
+			{
 			delete h;
+			conn->CheckEncapsulation(encapsulation);
+			}
 		}

 	if ( ! conn )
@@ -682,8 +719,70 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
 	}
 	}

+void NetSessions::DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr,
+		const IP_Hdr* inner, const EncapsulationStack* prev,
+		const EncapsulatingConn& ec)
+	{
+	struct pcap_pkthdr fake_hdr;
+	fake_hdr.caplen = fake_hdr.len = inner->TotalLen();
+
+	if ( hdr )
+		fake_hdr.ts = hdr->ts;
+	else
+		{
+		fake_hdr.ts.tv_sec = (time_t) network_time;
+		fake_hdr.ts.tv_usec = (suseconds_t)
+			((network_time - (double)fake_hdr.ts.tv_sec) * 1000000);
+		}
+
+	const u_char* pkt = 0;
+
+	if ( inner->IP4_Hdr() )
+		pkt = (const u_char*) inner->IP4_Hdr();
+	else
+		pkt = (const u_char*) inner->IP6_Hdr();
+
+	EncapsulationStack* outer = prev ?
+			new EncapsulationStack(*prev) : new EncapsulationStack();
+	outer->Add(ec);
+
+	DoNextPacket(t, &fake_hdr, inner, pkt, 0, outer);
+
+	delete inner;
+	delete outer;
+	}
+
+int NetSessions::ParseIPPacket(int caplen, const u_char* const pkt, int proto,
+		IP_Hdr*& inner)
+	{
+	if ( proto == IPPROTO_IPV6 )
+		{
+		if ( caplen < (int)sizeof(struct ip6_hdr) )
+			return -1;
+
+		inner = new IP_Hdr((const struct ip6_hdr*) pkt, false, caplen);
+		}
+
+	else if ( proto == IPPROTO_IPV4 )
+		{
+		if ( caplen < (int)sizeof(struct ip) )
+			return -1;
+
+		inner = new IP_Hdr((const struct ip*) pkt, false);
+		}
+
+	else
+		reporter->InternalError("Bad IP protocol version in ParseIPPacket");
+
+	if ( (uint32)caplen != inner->TotalLen() )
+		return (uint32)caplen < inner->TotalLen() ? -1 : 1;
+
+	return 0;
+	}
+
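`ParseIPPacket` above returns 0 when the inner packet looks valid, -1 when the captured data is shorter than the packet claims (truncation), and 1 when there is trailing excess. That length-check convention can be sketched with plain integers in place of the real headers (a hypothetical `CheckInnerLengths` helper, not in the commit):

```cpp
#include <cassert>

// Sketch of ParseIPPacket's length validation: hdr_len is the minimum
// header size for the claimed IP version, total_len is the packet's own
// total-length field. Returns -1 (truncated), 1 (excess capture), or
// 0 (lengths agree).
int CheckInnerLengths(int caplen, int hdr_len, unsigned total_len) {
    if (caplen < hdr_len)
        return -1;  // not even a full header captured

    if ((unsigned)caplen != total_len)
        return (unsigned)caplen < total_len ? -1 : 1;

    return 0;
}
```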
 bool NetSessions::CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
-			const struct pcap_pkthdr* h, const u_char* p)
+			const struct pcap_pkthdr* h,
+			const u_char* p, const EncapsulationStack* encap)
 	{
 	uint32 min_hdr_len = 0;
 	switch ( proto ) {
@@ -693,22 +792,32 @@ bool NetSessions::CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
 	case IPPROTO_UDP:
 		min_hdr_len = sizeof(struct udphdr);
 		break;
+	case IPPROTO_IPV4:
+		min_hdr_len = sizeof(struct ip);
+		break;
+	case IPPROTO_IPV6:
+		min_hdr_len = sizeof(struct ip6_hdr);
+		break;
+	case IPPROTO_NONE:
+		min_hdr_len = 0;
+		break;
 	case IPPROTO_ICMP:
 	case IPPROTO_ICMPV6:
 	default:
 		// Use for all other packets.
 		min_hdr_len = ICMP_MINLEN;
+		break;
 	}

 	if ( len < min_hdr_len )
 		{
-		Weird("truncated_header", h, p);
+		Weird("truncated_header", h, p, encap);
 		return true;
 		}

 	if ( caplen < min_hdr_len )
 		{
-		Weird("internally_truncated_header", h, p);
+		Weird("internally_truncated_header", h, p, encap);
 		return true;
 		}

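`CheckHeaderTrunc` above now also knows the minimum header size for tunneled IP protocols. A sketch of that per-protocol lookup, using literal IANA protocol numbers and fixed header byte counts in place of the `IPPROTO_*` constants and `sizeof`s (values assume the standard fixed header sizes; `ICMP_MINLEN` is 8):

```cpp
#include <cassert>

// Hypothetical mirror of the switch in CheckHeaderTrunc: minimum header
// length in bytes for a given next-protocol number.
unsigned MinHdrLen(int proto) {
    switch (proto) {
        case 6:  return 20;  // TCP: struct tcphdr
        case 17: return 8;   // UDP: struct udphdr
        case 4:  return 20;  // IPv4-in-IP: struct ip
        case 41: return 40;  // IPv6-in-IP: struct ip6_hdr
        case 59: return 0;   // IPPROTO_NONE: no payload expected
        default: return 8;   // ICMP and everything else: ICMP_MINLEN
    }
}
```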
@@ -1004,7 +1113,8 @@ void NetSessions::GetStats(SessionStats& s) const
 	}

 Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
-		const u_char* data, int proto, uint32 flow_label)
+		const u_char* data, int proto, uint32 flow_label,
+		const EncapsulationStack* encapsulation)
 	{
 	// FIXME: This should be cleaned up a bit, it's too protocol-specific.
 	// But I'm not yet sure what the right abstraction for these things is.
@@ -1060,7 +1170,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
 		id = &flip_id;
 		}

-	Connection* conn = new Connection(this, k, t, id, flow_label);
+	Connection* conn = new Connection(this, k, t, id, flow_label, encapsulation);
 	conn->SetTransport(tproto);
 	dpm->BuildInitialAnalyzerTree(tproto, conn, data);

@@ -1224,17 +1334,25 @@ void NetSessions::Internal(const char* msg, const struct pcap_pkthdr* hdr,
 	reporter->InternalError("%s", msg);
 	}

-void NetSessions::Weird(const char* name,
-		const struct pcap_pkthdr* hdr, const u_char* pkt)
+void NetSessions::Weird(const char* name, const struct pcap_pkthdr* hdr,
+		const u_char* pkt, const EncapsulationStack* encap)
 	{
 	if ( hdr )
 		dump_this_packet = 1;

+	if ( encap && encap->LastType() != BifEnum::Tunnel::NONE )
+		reporter->Weird(fmt("%s_in_tunnel", name));
+	else
 		reporter->Weird(name);
 	}

-void NetSessions::Weird(const char* name, const IP_Hdr* ip)
+void NetSessions::Weird(const char* name, const IP_Hdr* ip,
+		const EncapsulationStack* encap)
 	{
+	if ( encap && encap->LastType() != BifEnum::Tunnel::NONE )
+		reporter->Weird(ip->SrcAddr(), ip->DstAddr(),
+				fmt("%s_in_tunnel", name));
+	else
 		reporter->Weird(ip->SrcAddr(), ip->DstAddr(), name);
 	}
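The reworked `Weird()` overloads above decorate the weird name when the triggering packet arrived inside a tunnel, so e.g. `truncated_IP` inside a decapsulated tunnel surfaces as `truncated_IP_in_tunnel`. The naming rule itself is just string composition:

```cpp
#include <cassert>
#include <string>

// Sketch of the new Weird() naming: suffix the weird name when an
// encapsulation stack is present and its last type isn't Tunnel::NONE.
std::string WeirdName(const std::string& name, bool in_tunnel) {
    return in_tunnel ? name + "_in_tunnel" : name;
}
```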
@@ -11,9 +11,12 @@
 #include "PacketFilter.h"
 #include "Stats.h"
 #include "NetVar.h"
+#include "TunnelEncapsulation.h"
+#include <utility>

 struct pcap_pkthdr;

+class EncapsulationStack;
 class Connection;
 class ConnID;
 class OSFingerprint;
@@ -105,9 +108,10 @@ public:

 	void GetStats(SessionStats& s) const;

-	void Weird(const char* name,
-		const struct pcap_pkthdr* hdr, const u_char* pkt);
-	void Weird(const char* name, const IP_Hdr* ip);
+	void Weird(const char* name, const struct pcap_pkthdr* hdr,
+		const u_char* pkt, const EncapsulationStack* encap = 0);
+	void Weird(const char* name, const IP_Hdr* ip,
+		const EncapsulationStack* encap = 0);

 	PacketFilter* GetPacketFilter()
 		{
@@ -131,6 +135,51 @@ public:
 		icmp_conns.Length();
 	}

+	void DoNextPacket(double t, const struct pcap_pkthdr* hdr,
+			const IP_Hdr* ip_hdr, const u_char* const pkt,
+			int hdr_size, const EncapsulationStack* encapsulation);
+
+	/**
+	 * Wrapper that recurses on DoNextPacket for encapsulated IP packets.
+	 *
+	 * @param t Network time.
+	 * @param hdr If the outer pcap header is available, this pointer can be
+	 *        set so that the fake pcap header passed to DoNextPacket will
+	 *        use the same timeval. The caplen and len fields of the fake
+	 *        pcap header are always set to the TotalLength() of \a inner.
+	 * @param inner Pointer to the IP header wrapper of the inner packet;
+	 *        ownership of the pointer's memory is assumed by this function.
+	 * @param prev Any previous encapsulation stack of the caller, not
+	 *        including the most-recently found depth of encapsulation.
+	 * @param ec The most-recently found depth of encapsulation.
+	 */
+	void DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr,
+			const IP_Hdr* inner, const EncapsulationStack* prev,
+			const EncapsulatingConn& ec);
+
+	/**
+	 * Returns a wrapper IP_Hdr object if \a pkt appears to be a valid IPv4
+	 * or IPv6 header, based on whether it's long enough to contain such a
+	 * header and whether the payload length field of that header matches
+	 * the actual length of \a pkt given by \a caplen.
+	 *
+	 * @param caplen The length of \a pkt in bytes.
+	 * @param pkt The inner IP packet data.
+	 * @param proto Either IPPROTO_IPV6 or IPPROTO_IPV4 to indicate which IP
+	 *        protocol \a pkt corresponds to.
+	 * @param inner The inner IP packet wrapper pointer to be
+	 *        allocated/assigned if \a pkt looks like a valid IP packet or
+	 *        is at least long enough to hold an IP header.
+	 * @return 0 if the inner IP packet appeared valid, else -1 if \a caplen
+	 *         is less than the supposed IP packet's payload length field, or
+	 *         1 if \a caplen is greater than the supposed packet's payload
+	 *         length. In the -1 case, \a inner may still be non-null if
+	 *         \a caplen was long enough to hold an IP header, and \a inner
+	 *         is always non-null for the other return values.
+	 */
+	int ParseIPPacket(int caplen, const u_char* const pkt, int proto,
+			IP_Hdr*& inner);
+
 	unsigned int ConnectionMemoryUsage();
 	unsigned int ConnectionMemoryUsageConnVals();
 	unsigned int MemoryAllocation();
@@ -140,9 +189,11 @@ protected:
 	friend class RemoteSerializer;
 	friend class ConnCompressor;
 	friend class TimerMgrExpireTimer;
+	friend class IPTunnelTimer;

 	Connection* NewConn(HashKey* k, double t, const ConnID* id,
-			const u_char* data, int proto, uint32 flow_label);
+			const u_char* data, int proto, uint32 flow_label,
+			const EncapsulationStack* encapsulation);

 	// Check whether the tag of the current packet is consistent with
 	// the given connection. Returns:
@@ -173,10 +224,6 @@ protected:
 			const u_char* const pkt, int hdr_size,
 			PacketSortElement* pkt_elem);

-	void DoNextPacket(double t, const struct pcap_pkthdr* hdr,
-			const IP_Hdr* ip_hdr, const u_char* const pkt,
-			int hdr_size);
-
 	void NextPacketSecondary(double t, const struct pcap_pkthdr* hdr,
 			const u_char* const pkt, int hdr_size,
 			const PktSrc* src_ps);
@@ -194,7 +241,8 @@ protected:
 	// from lower-level headers or the length actually captured is less
 	// than that protocol's minimum header size.
 	bool CheckHeaderTrunc(int proto, uint32 len, uint32 caplen,
-			const struct pcap_pkthdr* hdr, const u_char* pkt);
+			const struct pcap_pkthdr* hdr, const u_char* pkt,
+			const EncapsulationStack* encap);

 	CompositeHash* ch;
 	PDict(Connection) tcp_conns;
@@ -202,6 +250,11 @@ protected:
 	PDict(Connection) icmp_conns;
 	PDict(FragReassembler) fragments;

+	typedef pair<IPAddr, IPAddr> IPPair;
+	typedef pair<EncapsulatingConn, double> TunnelActivity;
+	typedef std::map<IPPair, TunnelActivity> IPTunnelMap;
+	IPTunnelMap ip_tunnels;
+
 	ARP_Analyzer* arp_analyzer;

 	SteppingStoneManager* stp_manager;
@@ -219,6 +272,21 @@ protected:
 	TimerMgrMap timer_mgrs;
 };

+class IPTunnelTimer : public Timer {
+public:
+	IPTunnelTimer(double t, NetSessions::IPPair p)
+	: Timer(t + BifConst::Tunnel::ip_tunnel_timeout,
+		TIMER_IP_TUNNEL_INACTIVITY), tunnel_idx(p) {}
+
+	~IPTunnelTimer() {}
+
+	void Dispatch(double t, int is_expire);
+
+protected:
+	NetSessions::IPPair tunnel_idx;
+};
+
 // Manager for the currently active sessions.
 extern NetSessions* sessions;
246
src/Teredo.cc
Normal file
@@ -0,0 +1,246 @@
+
+#include "Teredo.h"
+#include "IP.h"
+#include "Reporter.h"
+
+void Teredo_Analyzer::Done()
+	{
+	Analyzer::Done();
+	Event(udp_session_done);
+	}
+
+bool TeredoEncapsulation::DoParse(const u_char* data, int& len,
+		bool found_origin, bool found_auth)
+	{
+	if ( len < 2 )
+		{
+		Weird("truncated_Teredo");
+		return false;
+		}
+
+	uint16 tag = ntohs((*((const uint16*)data)));
+
+	if ( tag == 0 )
+		{
+		// Origin Indication
+		if ( found_origin )
+			// Can't have multiple origin indications.
+			return false;
+
+		if ( len < 8 )
+			{
+			Weird("truncated_Teredo_origin_indication");
+			return false;
+			}
+
+		origin_indication = data;
+		len -= 8;
+		data += 8;
+		return DoParse(data, len, true, found_auth);
+		}
+
+	else if ( tag == 1 )
+		{
+		// Authentication
+		if ( found_origin || found_auth )
+			// Can't have multiple authentication headers, and they
+			// can't come after an origin indication.
+			return false;
+
+		if ( len < 4 )
+			{
+			Weird("truncated_Teredo_authentication");
+			return false;
+			}
+
+		uint8 id_len = data[2];
+		uint8 au_len = data[3];
+		uint16 tot_len = 4 + id_len + au_len + 8 + 1;
+
+		if ( len < tot_len )
+			{
+			Weird("truncated_Teredo_authentication");
+			return false;
+			}
+
+		auth = data;
+		len -= tot_len;
+		data += tot_len;
+		return DoParse(data, len, found_origin, true);
+		}
+
+	else if ( ((tag & 0xf000) >> 12) == 6 )
+		{
+		// IPv6
+		if ( len < 40 )
+			{
+			Weird("truncated_IPv6_in_Teredo");
+			return false;
+			}
+
+		// There's at least a possible IPv6 header; we'll decide what to do
+		// later if the payload length field doesn't match the actual length
+		// of the packet.
|
||||||
|
inner_ip = data;
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
RecordVal* TeredoEncapsulation::BuildVal(const IP_Hdr* inner) const
|
||||||
|
{
|
||||||
|
static RecordType* teredo_hdr_type = 0;
|
||||||
|
static RecordType* teredo_auth_type = 0;
|
||||||
|
static RecordType* teredo_origin_type = 0;
|
||||||
|
|
||||||
|
if ( ! teredo_hdr_type )
|
||||||
|
{
|
||||||
|
teredo_hdr_type = internal_type("teredo_hdr")->AsRecordType();
|
||||||
|
teredo_auth_type = internal_type("teredo_auth")->AsRecordType();
|
||||||
|
teredo_origin_type = internal_type("teredo_origin")->AsRecordType();
|
||||||
|
}
|
||||||
|
|
||||||
|
RecordVal* teredo_hdr = new RecordVal(teredo_hdr_type);
|
||||||
|
|
||||||
|
if ( auth )
|
||||||
|
{
|
||||||
|
RecordVal* teredo_auth = new RecordVal(teredo_auth_type);
|
||||||
|
uint8 id_len = *((uint8*)(auth + 2));
|
||||||
|
uint8 au_len = *((uint8*)(auth + 3));
|
||||||
|
uint64 nonce = ntohll(*((uint64*)(auth + 4 + id_len + au_len)));
|
||||||
|
uint8 conf = *((uint8*)(auth + 4 + id_len + au_len + 8));
|
||||||
|
teredo_auth->Assign(0, new StringVal(
|
||||||
|
new BroString(auth + 4, id_len, 1)));
|
||||||
|
teredo_auth->Assign(1, new StringVal(
|
||||||
|
new BroString(auth + 4 + id_len, au_len, 1)));
|
||||||
|
teredo_auth->Assign(2, new Val(nonce, TYPE_COUNT));
|
||||||
|
teredo_auth->Assign(3, new Val(conf, TYPE_COUNT));
|
||||||
|
teredo_hdr->Assign(0, teredo_auth);
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( origin_indication )
|
||||||
|
{
|
||||||
|
RecordVal* teredo_origin = new RecordVal(teredo_origin_type);
|
||||||
|
uint16 port = ntohs(*((uint16*)(origin_indication + 2))) ^ 0xFFFF;
|
||||||
|
uint32 addr = ntohl(*((uint32*)(origin_indication + 4))) ^ 0xFFFFFFFF;
|
||||||
|
teredo_origin->Assign(0, new PortVal(port, TRANSPORT_UDP));
|
||||||
|
teredo_origin->Assign(1, new AddrVal(htonl(addr)));
|
||||||
|
teredo_hdr->Assign(1, teredo_origin);
|
||||||
|
}
|
||||||
|
|
||||||
|
teredo_hdr->Assign(2, inner->BuildPktHdrVal());
|
||||||
|
return teredo_hdr;
|
||||||
|
}
|
||||||
|
|
||||||
|
void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
|
||||||
|
int seq, const IP_Hdr* ip, int caplen)
|
||||||
|
{
|
||||||
|
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
|
||||||
|
|
||||||
|
TeredoEncapsulation te(this);
|
||||||
|
|
||||||
|
if ( ! te.Parse(data, len) )
|
||||||
|
{
|
||||||
|
ProtocolViolation("Bad Teredo encapsulation", (const char*) data, len);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const EncapsulationStack* e = Conn()->GetEncapsulation();
|
||||||
|
|
||||||
|
if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
|
||||||
|
{
|
||||||
|
Weird("tunnel_depth");
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
IP_Hdr* inner = 0;
|
||||||
|
int rslt = sessions->ParseIPPacket(len, te.InnerIP(), IPPROTO_IPV6, inner);
|
||||||
|
|
||||||
|
if ( rslt > 0 )
|
||||||
|
{
|
||||||
|
if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 )
|
||||||
|
// Teredo bubbles having data after IPv6 header isn't strictly a
|
||||||
|
// violation, but a little weird.
|
||||||
|
Weird("Teredo_bubble_with_payload");
|
||||||
|
else
|
||||||
|
{
|
||||||
|
delete inner;
|
||||||
|
ProtocolViolation("Teredo payload length", (const char*) data, len);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( rslt == 0 || rslt > 0 )
|
||||||
|
{
|
||||||
|
if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
|
||||||
|
! ProtocolConfirmed() )
|
||||||
|
{
|
||||||
|
// Only confirm the Teredo tunnel and start decapsulating packets
|
||||||
|
// when no other sibling analyzer thinks it's already parsing the
|
||||||
|
// right protocol.
|
||||||
|
bool sibling_has_confirmed = false;
|
||||||
|
if ( Parent() )
|
||||||
|
{
|
||||||
|
LOOP_OVER_GIVEN_CONST_CHILDREN(i, Parent()->GetChildren())
|
||||||
|
{
|
||||||
|
if ( (*i)->ProtocolConfirmed() )
|
||||||
|
{
|
||||||
|
sibling_has_confirmed = true;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( ! sibling_has_confirmed )
|
||||||
|
ProtocolConfirmation();
|
||||||
|
else
|
||||||
|
{
|
||||||
|
delete inner;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
else
|
||||||
|
{
|
||||||
|
// Aggressively decapsulate anything with valid Teredo encapsulation
|
||||||
|
ProtocolConfirmation();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
else
|
||||||
|
{
|
||||||
|
delete inner;
|
||||||
|
ProtocolViolation("Truncated Teredo", (const char*) data, len);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
Val* teredo_hdr = 0;
|
||||||
|
|
||||||
|
if ( teredo_packet )
|
||||||
|
{
|
||||||
|
teredo_hdr = te.BuildVal(inner);
|
||||||
|
Conn()->Event(teredo_packet, 0, teredo_hdr);
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( te.Authentication() && teredo_authentication )
|
||||||
|
{
|
||||||
|
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
|
||||||
|
Conn()->Event(teredo_authentication, 0, teredo_hdr);
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( te.OriginIndication() && teredo_origin_indication )
|
||||||
|
{
|
||||||
|
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
|
||||||
|
Conn()->Event(teredo_origin_indication, 0, teredo_hdr);
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( inner->NextProto() == IPPROTO_NONE && teredo_bubble )
|
||||||
|
{
|
||||||
|
teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner);
|
||||||
|
Conn()->Event(teredo_bubble, 0, teredo_hdr);
|
||||||
|
}
|
||||||
|
|
||||||
|
EncapsulatingConn ec(Conn(), BifEnum::Tunnel::TEREDO);
|
||||||
|
|
||||||
|
sessions->DoNextInnerPacket(network_time, 0, inner, e, ec);
|
||||||
|
}
|
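The tag dispatch at the top of `TeredoEncapsulation::DoParse()` can be sketched in isolation. This is an illustrative model only, not part of the commit; the names `TeredoKind` and `classify` are invented here, and the two leading bytes are assumed to be in network byte order (as the `ntohs()` call in the real code implies):

```cpp
#include <cassert>
#include <cstdint>

// Classify the first two bytes of a Teredo payload the way DoParse() does:
// tag 0x0000 is an Origin Indication, tag 0x0001 is an Authentication
// header, and a high nibble of 6 means the payload starts directly with
// an IPv6 header; anything else is invalid.
enum TeredoKind { ORIGIN, AUTH, IPV6, INVALID };

TeredoKind classify(uint8_t b0, uint8_t b1)
	{
	uint16_t tag = (uint16_t(b0) << 8) | b1; // network byte order, like ntohs()

	if ( tag == 0 )
		return ORIGIN;
	if ( tag == 1 )
		return AUTH;
	if ( ((tag & 0xf000) >> 12) == 6 )
		return IPV6;
	return INVALID;
	}
```

Origin Indication and Authentication headers recurse back into `DoParse()` after being consumed, so a packet can carry both before the inner IPv6 header; the model above only covers the per-step decision.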
src/Teredo.h (new file)
@@ -0,0 +1,79 @@

#ifndef Teredo_h
#define Teredo_h

#include "Analyzer.h"
#include "NetVar.h"

class Teredo_Analyzer : public Analyzer {
public:
	Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn)
		{}

	virtual ~Teredo_Analyzer()
		{}

	virtual void Done();

	virtual void DeliverPacket(int len, const u_char* data, bool orig,
	                           int seq, const IP_Hdr* ip, int caplen);

	static Analyzer* InstantiateAnalyzer(Connection* conn)
		{ return new Teredo_Analyzer(conn); }

	static bool Available()
		{ return BifConst::Tunnel::enable_teredo &&
		         BifConst::Tunnel::max_depth > 0; }

	/**
	 * Emits a weird only if the analyzer has previously been able to
	 * decapsulate a Teredo packet since otherwise the weirds could happen
	 * frequently enough to be less than helpful.
	 */
	void Weird(const char* name) const
		{
		if ( ProtocolConfirmed() )
			reporter->Weird(Conn(), name);
		}

protected:
	friend class AnalyzerTimer;
	void ExpireTimer(double t);
};

class TeredoEncapsulation {
public:
	TeredoEncapsulation(const Teredo_Analyzer* ta)
		: inner_ip(0), origin_indication(0), auth(0), analyzer(ta)
		{}

	/**
	 * Returns whether input data parsed as a valid Teredo encapsulation type.
	 * If it was valid, the len argument is decremented appropriately.
	 */
	bool Parse(const u_char* data, int& len)
		{ return DoParse(data, len, false, false); }

	const u_char* InnerIP() const
		{ return inner_ip; }

	const u_char* OriginIndication() const
		{ return origin_indication; }

	const u_char* Authentication() const
		{ return auth; }

	RecordVal* BuildVal(const IP_Hdr* inner) const;

protected:
	bool DoParse(const u_char* data, int& len, bool found_orig, bool found_au);

	void Weird(const char* name) const
		{ analyzer->Weird(name); }

	const u_char* inner_ip;
	const u_char* origin_indication;
	const u_char* auth;
	const Teredo_Analyzer* analyzer;
};

#endif
@@ -20,6 +20,7 @@ const char* TimerNames[] = {
 	"IncrementalSendTimer",
 	"IncrementalWriteTimer",
 	"InterconnTimer",
+	"IPTunnelInactivityTimer",
 	"NetbiosExpireTimer",
 	"NetworkTimer",
 	"NTPExpireTimer",
@@ -26,6 +26,7 @@ enum TimerType {
 	TIMER_INCREMENTAL_SEND,
 	TIMER_INCREMENTAL_WRITE,
 	TIMER_INTERCONN,
+	TIMER_IP_TUNNEL_INACTIVITY,
 	TIMER_NB_EXPIRE,
 	TIMER_NETWORK,
 	TIMER_NTP_EXPIRE,
src/TunnelEncapsulation.cc (new file)
@@ -0,0 +1,55 @@

// See the file "COPYING" in the main distribution directory for copyright.

#include "TunnelEncapsulation.h"
#include "util.h"
#include "Conn.h"

EncapsulatingConn::EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t)
	: src_addr(c->OrigAddr()), dst_addr(c->RespAddr()),
	  src_port(c->OrigPort()), dst_port(c->RespPort()),
	  proto(c->ConnTransport()), type(t), uid(c->GetUID())
	{
	if ( ! uid )
		{
		uid = calculate_unique_id();
		c->SetUID(uid);
		}
	}

RecordVal* EncapsulatingConn::GetRecordVal() const
	{
	RecordVal *rv = new RecordVal(BifType::Record::Tunnel::EncapsulatingConn);

	RecordVal* id_val = new RecordVal(conn_id);
	id_val->Assign(0, new AddrVal(src_addr));
	id_val->Assign(1, new PortVal(ntohs(src_port), proto));
	id_val->Assign(2, new AddrVal(dst_addr));
	id_val->Assign(3, new PortVal(ntohs(dst_port), proto));
	rv->Assign(0, id_val);
	rv->Assign(1, new EnumVal(type, BifType::Enum::Tunnel::Type));

	char tmp[20];
	rv->Assign(2, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62)));

	return rv;
	}

bool operator==(const EncapsulationStack& e1, const EncapsulationStack& e2)
	{
	if ( ! e1.conns )
		return e2.conns;

	if ( ! e2.conns )
		return false;

	if ( e1.conns->size() != e2.conns->size() )
		return false;

	for ( size_t i = 0; i < e1.conns->size(); ++i )
		{
		if ( (*e1.conns)[i] != (*e2.conns)[i] )
			return false;
		}

	return true;
	}
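The endpoint-reversal rule that `EncapsulatingConn`'s `operator==` applies to IP-in-IP tunnels (declared in TunnelEncapsulation.h below) can be illustrated on its own. `ip_tunnel_equal` is a hypothetical helper written for this sketch, not part of the commit; it models only the address comparison, ignoring the UID, protocol, and port checks the real operator also performs:

```cpp
#include <cassert>
#include <string>

// Model the IP-tunnel branch of EncapsulatingConn::operator==: an IP
// tunnel between two endpoints compares equal regardless of which one
// is recorded as the source, so {A -> B} and {B -> A} are one tunnel.
bool ip_tunnel_equal(const std::string& s1, const std::string& d1,
                     const std::string& s2, const std::string& d2)
	{
	return (s1 == s2 && d1 == d2) || (s1 == d2 && d1 == s2);
	}
```

Non-IP tunnel types skip this symmetry and require an exact source/destination (and port) match, since those tunnels ride on directed transport-layer connections.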
src/TunnelEncapsulation.h (new file)
@@ -0,0 +1,208 @@

// See the file "COPYING" in the main distribution directory for copyright.

#ifndef TUNNELS_H
#define TUNNELS_H

#include "config.h"
#include "NetVar.h"
#include "IPAddr.h"
#include "Val.h"
#include <vector>

class Connection;

/**
 * Represents various types of tunnel "connections", that is, a pair of
 * endpoints whose communication encapsulates inner IP packets. This could
 * mean IP packets nested inside IP packets or IP packets nested inside a
 * transport layer protocol. EncapsulatingConn's are assigned a UID, which can
 * be shared with Connection's in the case the tunnel uses a transport-layer.
 */
class EncapsulatingConn {
public:
	/**
	 * Default tunnel connection constructor.
	 */
	EncapsulatingConn()
		: src_port(0), dst_port(0), proto(TRANSPORT_UNKNOWN),
		  type(BifEnum::Tunnel::NONE), uid(0)
		{}

	/**
	 * Construct an IP tunnel "connection" with its own UID.
	 * The assignment of "source" and "destination" addresses here can be
	 * arbitrary, comparison between EncapsulatingConn objects will treat IP
	 * tunnels as equivalent as long as the same two endpoints are involved.
	 *
	 * @param s The tunnel source address, likely taken from an IP header.
	 * @param d The tunnel destination address, likely taken from an IP header.
	 */
	EncapsulatingConn(const IPAddr& s, const IPAddr& d)
		: src_addr(s), dst_addr(d), src_port(0), dst_port(0),
		  proto(TRANSPORT_UNKNOWN), type(BifEnum::Tunnel::IP)
		{
		uid = calculate_unique_id();
		}

	/**
	 * Construct a tunnel connection using information from an already existing
	 * transport-layer-aware connection object.
	 *
	 * @param c The connection from which endpoint information can be extracted.
	 *        If it already has a UID associated with it, that gets inherited,
	 *        otherwise a new UID is created for this tunnel and \a c.
	 * @param t The type of tunneling that is occurring over the connection.
	 */
	EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t);

	/**
	 * Copy constructor.
	 */
	EncapsulatingConn(const EncapsulatingConn& other)
		: src_addr(other.src_addr), dst_addr(other.dst_addr),
		  src_port(other.src_port), dst_port(other.dst_port),
		  proto(other.proto), type(other.type), uid(other.uid)
		{}

	/**
	 * Destructor.
	 */
	~EncapsulatingConn()
		{}

	BifEnum::Tunnel::Type Type() const
		{ return type; }

	/**
	 * Returns record value of type "EncapsulatingConn" representing the tunnel.
	 */
	RecordVal* GetRecordVal() const;

	friend bool operator==(const EncapsulatingConn& ec1,
	                       const EncapsulatingConn& ec2)
		{
		if ( ec1.type != ec2.type )
			return false;

		if ( ec1.type == BifEnum::Tunnel::IP )
			// Reversing endpoints is still same tunnel.
			return ec1.uid == ec2.uid && ec1.proto == ec2.proto &&
			    ((ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr) ||
			     (ec1.src_addr == ec2.dst_addr && ec1.dst_addr == ec2.src_addr));

		return ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr &&
		       ec1.src_port == ec2.src_port && ec1.dst_port == ec2.dst_port &&
		       ec1.uid == ec2.uid && ec1.proto == ec2.proto;
		}

	friend bool operator!=(const EncapsulatingConn& ec1,
	                       const EncapsulatingConn& ec2)
		{
		return ! ( ec1 == ec2 );
		}

protected:
	IPAddr src_addr;
	IPAddr dst_addr;
	uint16 src_port;
	uint16 dst_port;
	TransportProto proto;
	BifEnum::Tunnel::Type type;
	uint64 uid;
};

/**
 * Abstracts an arbitrary amount of nested tunneling.
 */
class EncapsulationStack {
public:
	EncapsulationStack() : conns(0)
		{}

	EncapsulationStack(const EncapsulationStack& other)
		{
		if ( other.conns )
			conns = new vector<EncapsulatingConn>(*(other.conns));
		else
			conns = 0;
		}

	EncapsulationStack& operator=(const EncapsulationStack& other)
		{
		if ( this == &other )
			return *this;

		delete conns;

		if ( other.conns )
			conns = new vector<EncapsulatingConn>(*(other.conns));
		else
			conns = 0;

		return *this;
		}

	~EncapsulationStack() { delete conns; }

	/**
	 * Add a new inner-most tunnel to the EncapsulationStack.
	 *
	 * @param c The new inner-most tunnel to append to the tunnel chain.
	 */
	void Add(const EncapsulatingConn& c)
		{
		if ( ! conns )
			conns = new vector<EncapsulatingConn>();

		conns->push_back(c);
		}

	/**
	 * Return how many nested tunnels are involved in an encapsulation, zero
	 * meaning no tunnels are present.
	 */
	size_t Depth() const
		{
		return conns ? conns->size() : 0;
		}

	/**
	 * Return the tunnel type of the inner-most tunnel.
	 */
	BifEnum::Tunnel::Type LastType() const
		{
		return conns ? (*conns)[conns->size()-1].Type() : BifEnum::Tunnel::NONE;
		}

	/**
	 * Get the value of type "EncapsulatingConnVector" represented by the
	 * entire encapsulation chain.
	 */
	VectorVal* GetVectorVal() const
		{
		VectorVal* vv = new VectorVal(
		    internal_type("EncapsulatingConnVector")->AsVectorType());

		if ( conns )
			{
			for ( size_t i = 0; i < conns->size(); ++i )
				vv->Assign(i, (*conns)[i].GetRecordVal(), 0);
			}

		return vv;
		}

	friend bool operator==(const EncapsulationStack& e1,
	                       const EncapsulationStack& e2);

	friend bool operator!=(const EncapsulationStack& e1,
	                       const EncapsulationStack& e2)
		{
		return ! ( e1 == e2 );
		}

protected:
	vector<EncapsulatingConn>* conns;
};

#endif
@@ -1651,6 +1651,7 @@ int TableVal::RemoveFrom(Val* val) const
 	while ( (v = tbl->NextEntry(k, c)) )
 		{
 		Val* index = RecoverIndex(k);
+
 		Unref(index);
 		Unref(t->Delete(k));
 		delete k;
src/ayiya-analyzer.pac (new file)
@@ -0,0 +1,89 @@

connection AYIYA_Conn(bro_analyzer: BroAnalyzer)
	{
	upflow = AYIYA_Flow;
	downflow = AYIYA_Flow;
	};

flow AYIYA_Flow
	{
	datagram = PDU withcontext(connection, this);

	function process_ayiya(pdu: PDU): bool
		%{
		Connection *c = connection()->bro_analyzer()->Conn();
		const EncapsulationStack* e = c->GetEncapsulation();

		if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
			{
			reporter->Weird(c, "tunnel_depth");
			return false;
			}

		if ( ${pdu.op} != 1 )
			{
			// 1 is the "forward" command.
			return false;
			}

		if ( ${pdu.next_header} != IPPROTO_IPV6 &&
		     ${pdu.next_header} != IPPROTO_IPV4 )
			{
			reporter->Weird(c, "ayiya_tunnel_non_ip");
			return false;
			}

		if ( ${pdu.packet}.length() < (int)sizeof(struct ip) )
			{
			connection()->bro_analyzer()->ProtocolViolation(
			    "Truncated AYIYA", (const char*) ${pdu.packet}.data(),
			    ${pdu.packet}.length());
			return false;
			}

		const struct ip* ip = (const struct ip*) ${pdu.packet}.data();

		if ( ( ${pdu.next_header} == IPPROTO_IPV6 && ip->ip_v != 6 ) ||
		     ( ${pdu.next_header} == IPPROTO_IPV4 && ip->ip_v != 4) )
			{
			connection()->bro_analyzer()->ProtocolViolation(
			    "AYIYA next header mismatch", (const char*)${pdu.packet}.data(),
			    ${pdu.packet}.length());
			return false;
			}

		IP_Hdr* inner = 0;
		int result = sessions->ParseIPPacket(${pdu.packet}.length(),
		    ${pdu.packet}.data(), ${pdu.next_header}, inner);

		if ( result == 0 )
			connection()->bro_analyzer()->ProtocolConfirmation();

		else if ( result < 0 )
			connection()->bro_analyzer()->ProtocolViolation(
			    "Truncated AYIYA", (const char*) ${pdu.packet}.data(),
			    ${pdu.packet}.length());

		else
			connection()->bro_analyzer()->ProtocolViolation(
			    "AYIYA payload length", (const char*) ${pdu.packet}.data(),
			    ${pdu.packet}.length());

		if ( result != 0 )
			{
			delete inner;
			return false;
			}

		EncapsulatingConn ec(c, BifEnum::Tunnel::AYIYA);

		sessions->DoNextInnerPacket(network_time(), 0, inner, e, ec);

		return (result == 0) ? true : false;
		%}

	};

refine typeattr PDU += &let {
	proc_ayiya = $context.flow.process_ayiya(this);
};
src/ayiya-protocol.pac (new file)
@@ -0,0 +1,16 @@

type PDU = record {
	identity_byte: uint8;
	signature_byte: uint8;
	auth_and_op: uint8;
	next_header: uint8;
	epoch: uint32;
	identity: bytestring &length=identity_len;
	signature: bytestring &length=signature_len;
	packet: bytestring &restofdata;
} &let {
	identity_len = (1 << (identity_byte >> 4));
	signature_len = (signature_byte >> 4) * 4;
	auth = auth_and_op >> 4;
	op = auth_and_op & 0xF;
} &byteorder = littleendian;
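The `&let` fields above derive the variable-length field sizes from the high nibbles of the first two header bytes. A standalone sketch of those two computations (the function names are invented for illustration; only the arithmetic mirrors the record definition):

```cpp
#include <cassert>
#include <cstdint>

// Mirror the &let computations from the AYIYA PDU record: the identity
// length is 2^(high nibble of identity_byte) bytes, and the signature
// length is (high nibble of signature_byte) * 4 bytes.
int ayiya_identity_len(uint8_t identity_byte)
	{
	return 1 << (identity_byte >> 4);
	}

int ayiya_signature_len(uint8_t signature_byte)
	{
	return (signature_byte >> 4) * 4;
	}
```

So, for example, an identity byte with high nibble 4 yields a 16-byte identity field, and a signature byte with high nibble 5 yields a 20-byte signature, which is what lets binpac locate the encapsulated `packet` payload that the analyzer hands to `ParseIPPacket()`.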
src/ayiya.pac (new file)
@@ -0,0 +1,10 @@

%include binpac.pac
%include bro.pac

analyzer AYIYA withcontext {
	connection: AYIYA_Conn;
	flow: AYIYA_Flow;
};

%include ayiya-protocol.pac
%include ayiya-analyzer.pac
@@ -4814,7 +4814,9 @@ function calc_next_rotate%(i: interval%) : interval
 	%{
 	const char* base_time = log_rotate_base_time ?
 		log_rotate_base_time->AsString()->CheckString() : 0;
-	return new Val(calc_next_rotate(i, base_time), TYPE_INTERVAL);
+
+	double base = parse_rotate_base_time(base_time);
+	return new Val(calc_next_rotate(network_time, i, base), TYPE_INTERVAL);
 	%}

 ## Returns the size of a given file.
@@ -4,7 +4,6 @@

 const ignore_keep_alive_rexmit: bool;
 const skip_http_data: bool;
-const parse_udp_tunnels: bool;
 const use_conn_size_analyzer: bool;
 const report_gaps_for_partial: bool;

@@ -12,4 +11,11 @@ const NFS3::return_data: bool;
 const NFS3::return_data_max: count;
 const NFS3::return_data_first_only: bool;
+
+const Tunnel::max_depth: count;
+const Tunnel::enable_ip: bool;
+const Tunnel::enable_ayiya: bool;
+const Tunnel::enable_teredo: bool;
+const Tunnel::yielding_teredo_decapsulation: bool;
+const Tunnel::ip_tunnel_timeout: interval;

 const Threading::heartbeat_interval: interval;
src/event.bif (2126 lines changed; diff suppressed because it is too large)
@@ -71,7 +71,7 @@ declare(PDict, InputHash);
 class Manager::Stream {
 public:
 	string name;
-	string source;
+	ReaderBackend::ReaderInfo info;
 	bool removed;

 	ReaderMode mode;
@@ -80,6 +80,7 @@ public:
 	EnumVal* type;
 	ReaderFrontend* reader;
+	TableVal* config;

 	RecordVal* description;

@@ -103,6 +104,9 @@ Manager::Stream::~Stream()
 	if ( description )
 		Unref(description);

+	if ( config )
+		Unref(config);
+
 	if ( reader )
 		delete(reader);
 	}
@@ -300,6 +304,7 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 	Unref(sourceval);

 	EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
+	Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));

 	switch ( mode->InternalInt() )
 		{
@@ -324,10 +329,34 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 	info->reader = reader_obj;
 	info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
 	info->name = name;
-	info->source = source;
+	info->config = config->AsTableVal(); // ref'd by LookupWithDefault
+
+	ReaderBackend::ReaderInfo readerinfo;
+	readerinfo.source = source;
+
 	Ref(description);
 	info->description = description;
+
+		{
+		HashKey* k;
+		IterCookie* c = info->config->AsTable()->InitForIteration();
+
+		TableEntryVal* v;
+		while ( (v = info->config->AsTable()->NextEntry(k, c)) )
+			{
+			ListVal* index = info->config->RecoverIndex(k);
+			string key = index->Index(0)->AsString()->CheckString();
+			string value = v->Value()->AsString()->CheckString();
+			info->info.config.insert(std::make_pair(key, value));
+			Unref(index);
+			delete k;
+			}
+		}
+
+	info->info = readerinfo;
+
 	DBG_LOG(DBG_INPUT, "Successfully created new input stream %s",
 		name.c_str());

@@ -451,7 +480,8 @@ bool Manager::CreateEventStream(RecordVal* fval)
 	Unref(want_record); // ref'd by lookupwithdefault

 	assert(stream->reader);
-	stream->reader->Init(stream->source, stream->mode, stream->num_fields, logf );
+
+	stream->reader->Init(stream->info, stream->mode, stream->num_fields, logf );

 	readers[stream->reader] = stream;

@@ -628,7 +658,7 @@ bool Manager::CreateTableStream(RecordVal* fval)
 	assert(stream->reader);
-	stream->reader->Init(stream->source, stream->mode, fieldsV.size(), fields );
+	stream->reader->Init(stream->info, stream->mode, fieldsV.size(), fields );

 	readers[stream->reader] = stream;

@@ -689,31 +719,39 @@ bool Manager::IsCompatibleType(BroType* t, bool atomic_only)
 	}

-bool Manager::RemoveStream(const string &name)
+bool Manager::RemoveStream(Stream *i)
 	{
-	Stream *i = FindStream(name);
-
 	if ( i == 0 )
 		return false; // not found

 	if ( i->removed )
 		{
-		reporter->Error("Stream %s is already queued for removal. Ignoring remove.", name.c_str());
-		return false;
+		reporter->Warning("Stream %s is already queued for removal. Ignoring remove.", i->name.c_str());
+		return true;
 		}

 	i->removed = true;

 	i->reader->Close();

-#ifdef DEBUG
 	DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s",
-		name.c_str());
-#endif
+		i->name.c_str());

 	return true;
 	}

+bool Manager::RemoveStream(ReaderFrontend* frontend)
+	{
+	return RemoveStream(FindStream(frontend));
+	}
+
+bool Manager::RemoveStream(const string &name)
+	{
+	return RemoveStream(FindStream(name));
+	}
+
 bool Manager::RemoveStreamContinuation(ReaderFrontend* reader)
 	{
 	Stream *i = FindStream(reader);
@@ -1200,7 +1238,7 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
 #endif

 	// Send event that the current update is indeed finished.
-	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->source.c_str()));
+	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info.source.c_str()));
 	}

 void Manager::Put(ReaderFrontend* reader, Value* *vals)
@@ -72,7 +72,7 @@ public:
 	/**
 	 * Deletes an existing input stream.
 	 *
-	 * @param id The enum value corresponding the input stream.
+	 * @param id The name of the input stream to be removed.
 	 *
 	 * This method corresponds directly to the internal BiF defined in
 	 * input.bif, which just forwards here.
@@ -88,6 +88,7 @@ protected:
 	friend class SendEntryMessage;
 	friend class EndCurrentSendMessage;
 	friend class ReaderClosedMessage;
+	friend class DisableMessage;

 	// For readers to write to input stream in direct mode (reporting
|
// For readers to write to input stream in direct mode (reporting
|
||||||
// new/deleted values directly). Functions take ownership of
|
// new/deleted values directly). Functions take ownership of
|
||||||
|
@ -119,11 +120,25 @@ protected:
|
||||||
// stream is still received.
|
// stream is still received.
|
||||||
bool RemoveStreamContinuation(ReaderFrontend* reader);
|
bool RemoveStreamContinuation(ReaderFrontend* reader);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Deletes an existing input stream.
|
||||||
|
*
|
||||||
|
* @param frontend pointer to the frontend of the input stream to be removed.
|
||||||
|
*
|
||||||
|
* This method is used by the reader backends to remove a reader when it fails
|
||||||
|
* for some reason.
|
||||||
|
*/
|
||||||
|
bool RemoveStream(ReaderFrontend* frontend);
|
||||||
|
|
||||||
private:
|
private:
|
||||||
class Stream;
|
class Stream;
|
||||||
class TableStream;
|
class TableStream;
|
||||||
class EventStream;
|
class EventStream;
|
||||||
|
|
||||||
|
// Actual RemoveStream implementation -- the function's public and
|
||||||
|
// protected definitions are wrappers around this function.
|
||||||
|
bool RemoveStream(Stream* i);
|
||||||
|
|
||||||
bool CreateStream(Stream*, RecordVal* description);
|
bool CreateStream(Stream*, RecordVal* description);
|
||||||
|
|
||||||
// SendEntry implementation for Table stream.
|
// SendEntry implementation for Table stream.
|
||||||
|
|
|
@ -113,6 +113,7 @@ public:
|
||||||
|
|
||||||
virtual bool Process()
|
virtual bool Process()
|
||||||
{
|
{
|
||||||
|
Object()->SetDisable();
|
||||||
return input_mgr->RemoveStreamContinuation(Object());
|
return input_mgr->RemoveStreamContinuation(Object());
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -129,10 +130,59 @@ public:
|
||||||
virtual bool Process()
|
virtual bool Process()
|
||||||
{
|
{
|
||||||
Object()->SetDisable();
|
Object()->SetDisable();
|
||||||
|
// And - because we do not need disabled objects any more -
|
||||||
|
// there is no way to re-enable them, so simply delete them.
|
||||||
|
// This avoids the problem of having to periodically check if
|
||||||
|
// there are any disabled readers out there. As soon as a
|
||||||
|
// reader disables itself, it deletes itself.
|
||||||
|
input_mgr->RemoveStream(Object());
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
using namespace logging;
|
||||||
|
|
||||||
|
bool ReaderBackend::ReaderInfo::Read(SerializationFormat* fmt)
|
||||||
|
{
|
||||||
|
int size;
|
||||||
|
|
||||||
|
if ( ! (fmt->Read(&source, "source") &&
|
||||||
|
fmt->Read(&size, "config_size")) )
|
||||||
|
return false;
|
||||||
|
|
||||||
|
config.clear();
|
||||||
|
|
||||||
|
while ( size )
|
||||||
|
{
|
||||||
|
string value;
|
||||||
|
string key;
|
||||||
|
|
||||||
|
if ( ! (fmt->Read(&value, "config-value") && fmt->Read(&value, "config-key")) )
|
||||||
|
return false;
|
||||||
|
|
||||||
|
config.insert(std::make_pair(value, key));
|
||||||
|
}
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
bool ReaderBackend::ReaderInfo::Write(SerializationFormat* fmt) const
|
||||||
|
{
|
||||||
|
int size = config.size();
|
||||||
|
|
||||||
|
if ( ! (fmt->Write(source, "source") &&
|
||||||
|
fmt->Write(size, "config_size")) )
|
||||||
|
return false;
|
||||||
|
|
||||||
|
for ( config_map::const_iterator i = config.begin(); i != config.end(); ++i )
|
||||||
|
{
|
||||||
|
if ( ! (fmt->Write(i->first, "config-value") && fmt->Write(i->second, "config-key")) )
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread()
|
ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread()
|
||||||
{
|
{
|
||||||
|
@ -176,18 +226,18 @@ void ReaderBackend::SendEntry(Value* *vals)
|
||||||
SendOut(new SendEntryMessage(frontend, vals));
|
SendOut(new SendEntryMessage(frontend, vals));
|
||||||
}
|
}
|
||||||
|
|
||||||
bool ReaderBackend::Init(string arg_source, ReaderMode arg_mode, const int arg_num_fields,
|
bool ReaderBackend::Init(const ReaderInfo& arg_info, ReaderMode arg_mode, const int arg_num_fields,
|
||||||
const threading::Field* const* arg_fields)
|
const threading::Field* const* arg_fields)
|
||||||
{
|
{
|
||||||
source = arg_source;
|
info = arg_info;
|
||||||
mode = arg_mode;
|
mode = arg_mode;
|
||||||
num_fields = arg_num_fields;
|
num_fields = arg_num_fields;
|
||||||
fields = arg_fields;
|
fields = arg_fields;
|
||||||
|
|
||||||
SetName("InputReader/"+source);
|
SetName("InputReader/"+info.source);
|
||||||
|
|
||||||
// disable if DoInit returns error.
|
// disable if DoInit returns error.
|
||||||
int success = DoInit(arg_source, mode, arg_num_fields, arg_fields);
|
int success = DoInit(arg_info, mode, arg_num_fields, arg_fields);
|
||||||
|
|
||||||
if ( ! success )
|
if ( ! success )
|
||||||
{
|
{
|
||||||
|
@ -203,8 +253,7 @@ bool ReaderBackend::Init(string arg_source, ReaderMode arg_mode, const int arg_n
|
||||||
void ReaderBackend::Close()
|
void ReaderBackend::Close()
|
||||||
{
|
{
|
||||||
DoClose();
|
DoClose();
|
||||||
disabled = true;
|
disabled = true; // frontend disables itself when it gets the Close-message.
|
||||||
DisableFrontend();
|
|
||||||
SendOut(new ReaderClosedMessage(frontend));
|
SendOut(new ReaderClosedMessage(frontend));
|
||||||
|
|
||||||
if ( fields != 0 )
|
if ( fields != 0 )
|
||||||
|
|
|
@ -7,6 +7,8 @@
|
||||||
|
|
||||||
#include "threading/SerialTypes.h"
|
#include "threading/SerialTypes.h"
|
||||||
#include "threading/MsgThread.h"
|
#include "threading/MsgThread.h"
|
||||||
|
class RemoteSerializer;
|
||||||
|
|
||||||
|
|
||||||
namespace input {
|
namespace input {
|
||||||
|
|
||||||
|
@ -15,17 +17,24 @@ namespace input {
|
||||||
*/
|
*/
|
||||||
enum ReaderMode {
|
enum ReaderMode {
|
||||||
/**
|
/**
|
||||||
* TODO Bernhard.
|
* Manual refresh reader mode. The reader will read the file once,
|
||||||
|
* and send all read data back to the manager. After that, no automatic
|
||||||
|
* refresh should happen. Manual refreshes can be triggered from the
|
||||||
|
* scripting layer using force_update.
|
||||||
*/
|
*/
|
||||||
MODE_MANUAL,
|
MODE_MANUAL,
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* TODO Bernhard.
|
* Automatic rereading mode. The reader should monitor the
|
||||||
|
* data source for changes continually. When the data source changes,
|
||||||
|
* either the whole file has to be resent using the SendEntry/EndCurrentSend functions.
|
||||||
*/
|
*/
|
||||||
MODE_REREAD,
|
MODE_REREAD,
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* TODO Bernhard.
|
* Streaming reading mode. The reader should monitor the data source
|
||||||
|
* for new appended data. When new data is appended is has to be sent
|
||||||
|
* using the Put api functions.
|
||||||
*/
|
*/
|
||||||
MODE_STREAM
|
MODE_STREAM
|
||||||
};
|
};
|
||||||
|
@ -58,6 +67,35 @@ public:
|
||||||
*/
|
*/
|
||||||
virtual ~ReaderBackend();
|
virtual ~ReaderBackend();
|
||||||
|
|
||||||
|
/**
|
||||||
|
* A struct passing information to the reader at initialization time.
|
||||||
|
*/
|
||||||
|
struct ReaderInfo
|
||||||
|
{
|
||||||
|
typedef std::map<string, string> config_map;
|
||||||
|
|
||||||
|
/**
|
||||||
|
* A string left to the interpretation of the reader
|
||||||
|
* implementation; it corresponds to the value configured on
|
||||||
|
* the script-level for the logging filter.
|
||||||
|
*/
|
||||||
|
string source;
|
||||||
|
|
||||||
|
/**
|
||||||
|
* A map of key/value pairs corresponding to the relevant
|
||||||
|
* filter's "config" table.
|
||||||
|
*/
|
||||||
|
config_map config;
|
||||||
|
|
||||||
|
private:
|
||||||
|
friend class ::RemoteSerializer;
|
||||||
|
|
||||||
|
// Note, these need to be adapted when changing the struct's
|
||||||
|
// fields. They serialize/deserialize the struct.
|
||||||
|
bool Read(SerializationFormat* fmt);
|
||||||
|
bool Write(SerializationFormat* fmt) const;
|
||||||
|
};
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* One-time initialization of the reader to define the input source.
|
* One-time initialization of the reader to define the input source.
|
||||||
*
|
*
|
||||||
|
@ -72,9 +110,12 @@ public:
|
||||||
* @param fields The types and names of the fields to be retrieved
|
* @param fields The types and names of the fields to be retrieved
|
||||||
* from the input source.
|
* from the input source.
|
||||||
*
|
*
|
||||||
|
* @param config A string map containing additional configuration options
|
||||||
|
* for the reader.
|
||||||
|
*
|
||||||
* @return False if an error occured.
|
* @return False if an error occured.
|
||||||
*/
|
*/
|
||||||
bool Init(string source, ReaderMode mode, int num_fields, const threading::Field* const* fields);
|
bool Init(const ReaderInfo& info, ReaderMode mode, int num_fields, const threading::Field* const* fields);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Finishes reading from this input stream in a regular fashion. Must
|
* Finishes reading from this input stream in a regular fashion. Must
|
||||||
|
@ -102,6 +143,22 @@ public:
|
||||||
*/
|
*/
|
||||||
void DisableFrontend();
|
void DisableFrontend();
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the log fields as passed into the constructor.
|
||||||
|
*/
|
||||||
|
const threading::Field* const * Fields() const { return fields; }
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the additional reader information into the constructor.
|
||||||
|
*/
|
||||||
|
const ReaderInfo& Info() const { return info; }
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the number of log fields as passed into the constructor.
|
||||||
|
*/
|
||||||
|
int NumFields() const { return num_fields; }
|
||||||
|
|
||||||
|
|
||||||
protected:
|
protected:
|
||||||
// Methods that have to be overwritten by the individual readers
|
// Methods that have to be overwritten by the individual readers
|
||||||
|
|
||||||
|
@ -123,7 +180,7 @@ protected:
|
||||||
* provides accessor methods to get them later, and they are passed
|
* provides accessor methods to get them later, and they are passed
|
||||||
* in here only for convinience.
|
* in here only for convinience.
|
||||||
*/
|
*/
|
||||||
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields) = 0;
|
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields) = 0;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Reader-specific method implementing input finalization at
|
* Reader-specific method implementing input finalization at
|
||||||
|
@ -152,26 +209,11 @@ protected:
|
||||||
*/
|
*/
|
||||||
virtual bool DoUpdate() = 0;
|
virtual bool DoUpdate() = 0;
|
||||||
|
|
||||||
/**
|
|
||||||
* Returns the input source as passed into Init()/.
|
|
||||||
*/
|
|
||||||
const string Source() const { return source; }
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Returns the reader mode as passed into Init().
|
* Returns the reader mode as passed into Init().
|
||||||
*/
|
*/
|
||||||
const ReaderMode Mode() const { return mode; }
|
const ReaderMode Mode() const { return mode; }
|
||||||
|
|
||||||
/**
|
|
||||||
* Returns the number of log fields as passed into Init().
|
|
||||||
*/
|
|
||||||
unsigned int NumFields() const { return num_fields; }
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Returns the log fields as passed into Init().
|
|
||||||
*/
|
|
||||||
const threading::Field* const * Fields() const { return fields; }
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Method allowing a reader to send a specified Bro event. Vals must
|
* Method allowing a reader to send a specified Bro event. Vals must
|
||||||
* match the values expected by the bro event.
|
* match the values expected by the bro event.
|
||||||
|
@ -272,7 +314,7 @@ private:
|
||||||
// from this class, it's running in a different thread!
|
// from this class, it's running in a different thread!
|
||||||
ReaderFrontend* frontend;
|
ReaderFrontend* frontend;
|
||||||
|
|
||||||
string source;
|
ReaderInfo info;
|
||||||
ReaderMode mode;
|
ReaderMode mode;
|
||||||
unsigned int num_fields;
|
unsigned int num_fields;
|
||||||
const threading::Field* const * fields; // raw mapping
|
const threading::Field* const * fields; // raw mapping
|
||||||
|
|
|
@ -6,26 +6,23 @@
|
||||||
|
|
||||||
#include "threading/MsgThread.h"
|
#include "threading/MsgThread.h"
|
||||||
|
|
||||||
// FIXME: cleanup of disabled inputreaders is missing. we need this, because
|
|
||||||
// stuff can e.g. fail in init and might never be removed afterwards.
|
|
||||||
|
|
||||||
namespace input {
|
namespace input {
|
||||||
|
|
||||||
class InitMessage : public threading::InputMessage<ReaderBackend>
|
class InitMessage : public threading::InputMessage<ReaderBackend>
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
InitMessage(ReaderBackend* backend, const string source, ReaderMode mode,
|
InitMessage(ReaderBackend* backend, const ReaderBackend::ReaderInfo& info, ReaderMode mode,
|
||||||
const int num_fields, const threading::Field* const* fields)
|
const int num_fields, const threading::Field* const* fields)
|
||||||
: threading::InputMessage<ReaderBackend>("Init", backend),
|
: threading::InputMessage<ReaderBackend>("Init", backend),
|
||||||
source(source), mode(mode), num_fields(num_fields), fields(fields) { }
|
info(info), mode(mode), num_fields(num_fields), fields(fields) { }
|
||||||
|
|
||||||
virtual bool Process()
|
virtual bool Process()
|
||||||
{
|
{
|
||||||
return Object()->Init(source, mode, num_fields, fields);
|
return Object()->Init(info, mode, num_fields, fields);
|
||||||
}
|
}
|
||||||
|
|
||||||
private:
|
private:
|
||||||
const string source;
|
const ReaderBackend::ReaderInfo info;
|
||||||
const ReaderMode mode;
|
const ReaderMode mode;
|
||||||
const int num_fields;
|
const int num_fields;
|
||||||
const threading::Field* const* fields;
|
const threading::Field* const* fields;
|
||||||
|
@ -66,8 +63,8 @@ ReaderFrontend::~ReaderFrontend()
|
||||||
{
|
{
|
||||||
}
|
}
|
||||||
|
|
||||||
void ReaderFrontend::Init(string arg_source, ReaderMode mode, const int num_fields,
|
void ReaderFrontend::Init(const ReaderBackend::ReaderInfo& arg_info, ReaderMode mode, const int arg_num_fields,
|
||||||
const threading::Field* const* fields)
|
const threading::Field* const* arg_fields)
|
||||||
{
|
{
|
||||||
if ( disabled )
|
if ( disabled )
|
||||||
return;
|
return;
|
||||||
|
@ -75,10 +72,12 @@ void ReaderFrontend::Init(string arg_source, ReaderMode mode, const int num_fiel
|
||||||
if ( initialized )
|
if ( initialized )
|
||||||
reporter->InternalError("reader initialize twice");
|
reporter->InternalError("reader initialize twice");
|
||||||
|
|
||||||
source = arg_source;
|
info = arg_info;
|
||||||
|
num_fields = arg_num_fields;
|
||||||
|
fields = arg_fields;
|
||||||
initialized = true;
|
initialized = true;
|
||||||
|
|
||||||
backend->SendIn(new InitMessage(backend, arg_source, mode, num_fields, fields));
|
backend->SendIn(new InitMessage(backend, info, mode, num_fields, fields));
|
||||||
}
|
}
|
||||||
|
|
||||||
void ReaderFrontend::Update()
|
void ReaderFrontend::Update()
|
||||||
|
@ -106,15 +105,16 @@ void ReaderFrontend::Close()
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
disabled = true;
|
||||||
backend->SendIn(new CloseMessage(backend));
|
backend->SendIn(new CloseMessage(backend));
|
||||||
}
|
}
|
||||||
|
|
||||||
string ReaderFrontend::Name() const
|
string ReaderFrontend::Name() const
|
||||||
{
|
{
|
||||||
if ( source.size() )
|
if ( info.source.size() )
|
||||||
return ty_name;
|
return ty_name;
|
||||||
|
|
||||||
return ty_name + "/" + source;
|
return ty_name + "/" + info.source;
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
|
@ -52,7 +52,7 @@ public:
|
||||||
*
|
*
|
||||||
* This method must only be called from the main thread.
|
* This method must only be called from the main thread.
|
||||||
*/
|
*/
|
||||||
void Init(string arg_source, ReaderMode mode, const int arg_num_fields, const threading::Field* const* fields);
|
void Init(const ReaderBackend::ReaderInfo& info, ReaderMode mode, const int arg_num_fields, const threading::Field* const* fields);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Force an update of the current input source. Actual action depends
|
* Force an update of the current input source. Actual action depends
|
||||||
|
@ -102,13 +102,23 @@ public:
|
||||||
*/
|
*/
|
||||||
string Name() const;
|
string Name() const;
|
||||||
|
|
||||||
protected:
|
/**
|
||||||
friend class Manager;
|
* Returns the additional reader information into the constructor.
|
||||||
|
*/
|
||||||
|
const ReaderBackend::ReaderInfo& Info() const { return info; }
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Returns the source as passed into the constructor.
|
* Returns the number of log fields as passed into the constructor.
|
||||||
*/
|
*/
|
||||||
const string& Source() const { return source; };
|
int NumFields() const { return num_fields; }
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the log fields as passed into the constructor.
|
||||||
|
*/
|
||||||
|
const threading::Field* const * Fields() const { return fields; }
|
||||||
|
|
||||||
|
protected:
|
||||||
|
friend class Manager;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Returns the name of the backend's type.
|
* Returns the name of the backend's type.
|
||||||
|
@ -117,7 +127,9 @@ protected:
|
||||||
|
|
||||||
private:
|
private:
|
||||||
ReaderBackend* backend; // The backend we have instanatiated.
|
ReaderBackend* backend; // The backend we have instanatiated.
|
||||||
string source;
|
ReaderBackend::ReaderInfo info; // Meta information as passed to Init().
|
||||||
|
const threading::Field* const* fields; // The log fields.
|
||||||
|
int num_fields; // Information as passed to init();
|
||||||
string ty_name; // Backend type, set by manager.
|
string ty_name; // Backend type, set by manager.
|
||||||
bool disabled; // True if disabled.
|
bool disabled; // True if disabled.
|
||||||
bool initialized; // True if initialized.
|
bool initialized; // True if initialized.
|
||||||
|
|
|
@ -83,14 +83,14 @@ void Ascii::DoClose()
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
bool Ascii::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields)
|
bool Ascii::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields)
|
||||||
{
|
{
|
||||||
mtime = 0;
|
mtime = 0;
|
||||||
|
|
||||||
file = new ifstream(path.c_str());
|
file = new ifstream(info.source.c_str());
|
||||||
if ( ! file->is_open() )
|
if ( ! file->is_open() )
|
||||||
{
|
{
|
||||||
Error(Fmt("Init: cannot open %s", path.c_str()));
|
Error(Fmt("Init: cannot open %s", info.source.c_str()));
|
||||||
delete(file);
|
delete(file);
|
||||||
file = 0;
|
file = 0;
|
||||||
return false;
|
return false;
|
||||||
|
@ -98,7 +98,7 @@ bool Ascii::DoInit(string path, ReaderMode mode, int num_fields, const Field* co
|
||||||
|
|
||||||
if ( ReadHeader(false) == false )
|
if ( ReadHeader(false) == false )
|
||||||
{
|
{
|
||||||
Error(Fmt("Init: cannot open %s; headers are incorrect", path.c_str()));
|
Error(Fmt("Init: cannot open %s; headers are incorrect", info.source.c_str()));
|
||||||
file->close();
|
file->close();
|
||||||
delete(file);
|
delete(file);
|
||||||
file = 0;
|
file = 0;
|
||||||
|
@ -147,7 +147,7 @@ bool Ascii::ReadHeader(bool useCached)
|
||||||
//printf("Updating fields from description %s\n", line.c_str());
|
//printf("Updating fields from description %s\n", line.c_str());
|
||||||
columnMap.clear();
|
columnMap.clear();
|
||||||
|
|
||||||
for ( unsigned int i = 0; i < NumFields(); i++ )
|
for ( int i = 0; i < NumFields(); i++ )
|
||||||
{
|
{
|
||||||
const Field* field = Fields()[i];
|
const Field* field = Fields()[i];
|
||||||
|
|
||||||
|
@ -164,7 +164,7 @@ bool Ascii::ReadHeader(bool useCached)
|
||||||
}
|
}
|
||||||
|
|
||||||
Error(Fmt("Did not find requested field %s in input data file %s.",
|
Error(Fmt("Did not find requested field %s in input data file %s.",
|
||||||
field->name.c_str(), Source().c_str()));
|
field->name.c_str(), Info().source.c_str()));
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -367,9 +367,9 @@ bool Ascii::DoUpdate()
|
||||||
{
|
{
|
||||||
// check if the file has changed
|
// check if the file has changed
|
||||||
struct stat sb;
|
struct stat sb;
|
||||||
if ( stat(Source().c_str(), &sb) == -1 )
|
if ( stat(Info().source.c_str(), &sb) == -1 )
|
||||||
{
|
{
|
||||||
Error(Fmt("Could not get stat for %s", Source().c_str()));
|
Error(Fmt("Could not get stat for %s", Info().source.c_str()));
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -403,10 +403,10 @@ bool Ascii::DoUpdate()
|
||||||
file = 0;
|
file = 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
file = new ifstream(Source().c_str());
|
file = new ifstream(Info().source.c_str());
|
||||||
if ( ! file->is_open() )
|
if ( ! file->is_open() )
|
||||||
{
|
{
|
||||||
Error(Fmt("cannot open %s", Source().c_str()));
|
Error(Fmt("cannot open %s", Info().source.c_str()));
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -490,7 +490,7 @@ bool Ascii::DoUpdate()
|
||||||
}
|
}
|
||||||
|
|
||||||
//printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields);
|
//printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields);
|
||||||
assert ( (unsigned int) fpos == NumFields() );
|
assert ( fpos == NumFields() );
|
||||||
|
|
||||||
if ( Mode() == MODE_STREAM )
|
if ( Mode() == MODE_STREAM )
|
||||||
Put(fields);
|
Put(fields);
|
||||||
|
|
|
@ -38,7 +38,7 @@ public:
|
||||||
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); }
|
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); }
|
||||||
|
|
||||||
protected:
|
protected:
|
||||||
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
||||||
virtual void DoClose();
|
virtual void DoClose();
|
||||||
virtual bool DoUpdate();
|
virtual bool DoUpdate();
|
||||||
virtual bool DoHeartbeat(double network_time, double current_time);
|
virtual bool DoHeartbeat(double network_time, double current_time);
|
||||||
|
|
|
@ -36,9 +36,9 @@ void Benchmark::DoClose()
|
||||||
{
|
{
|
||||||
}
|
}
|
||||||
|
|
||||||
bool Benchmark::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields)
|
bool Benchmark::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields)
|
||||||
{
|
{
|
||||||
num_lines = atoi(path.c_str());
|
num_lines = atoi(info.source.c_str());
|
||||||
|
|
||||||
if ( autospread != 0.0 )
|
if ( autospread != 0.0 )
|
||||||
autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) );
|
autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) );
|
||||||
|
@ -80,7 +80,7 @@ bool Benchmark::DoUpdate()
|
||||||
for ( int i = 0; i < linestosend; i++ )
|
for ( int i = 0; i < linestosend; i++ )
|
||||||
{
|
{
|
||||||
Value** field = new Value*[NumFields()];
|
Value** field = new Value*[NumFields()];
|
||||||
for (unsigned int j = 0; j < NumFields(); j++ )
|
for (int j = 0; j < NumFields(); j++ )
|
||||||
field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype);
|
field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype);
|
||||||
|
|
||||||
if ( Mode() == MODE_STREAM )
|
if ( Mode() == MODE_STREAM )
|
||||||
|
|
|
@ -18,7 +18,7 @@ public:
|
||||||
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); }
|
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); }
|
||||||
|
|
||||||
protected:
|
protected:
|
||||||
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
||||||
virtual void DoClose();
|
virtual void DoClose();
|
||||||
virtual bool DoUpdate();
|
virtual bool DoUpdate();
|
||||||
virtual bool DoHeartbeat(double network_time, double current_time);
|
virtual bool DoHeartbeat(double network_time, double current_time);
|
||||||
|
|
|
@ -79,6 +79,9 @@ bool Raw::CloseInput()
|
||||||
InternalError(Fmt("Trying to close closed file for stream %s", fname.c_str()));
|
InternalError(Fmt("Trying to close closed file for stream %s", fname.c_str()));
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
#ifdef DEBUG
|
||||||
|
Debug(DBG_INPUT, "Raw reader starting close");
|
||||||
|
#endif
|
||||||
|
|
||||||
delete in;
|
delete in;
|
||||||
|
|
||||||
|
@ -90,18 +93,22 @@ bool Raw::CloseInput()
|
||||||
in = NULL;
|
in = NULL;
|
||||||
file = NULL;
|
file = NULL;
|
||||||
|
|
||||||
|
#ifdef DEBUG
|
||||||
|
Debug(DBG_INPUT, "Raw reader finished close");
|
||||||
|
#endif
|
||||||
|
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
bool Raw::DoInit(string path, ReaderMode mode, int num_fields, const Field* const* fields)
|
bool Raw::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields)
|
||||||
{
|
{
|
||||||
fname = path;
|
fname = info.source;
|
||||||
mtime = 0;
|
mtime = 0;
|
||||||
execute = false;
|
execute = false;
|
||||||
firstrun = true;
|
firstrun = true;
|
||||||
bool result;
|
bool result;
|
||||||
|
|
||||||
if ( path.length() == 0 )
|
if ( info.source.length() == 0 )
|
||||||
{
|
{
|
||||||
Error("No source path provided");
|
Error("No source path provided");
|
||||||
return false;
|
return false;
|
||||||
|
@ -122,13 +129,13 @@ bool Raw::DoInit(string path, ReaderMode mode, int num_fields, const Field* cons
|
||||||
}
|
}
|
||||||
|
|
||||||
// do Initialization
|
// do Initialization
|
||||||
char last = path[path.length()-1];
|
char last = info.source[info.source.length()-1];
|
||||||
if ( last == '|' )
|
if ( last == '|' )
|
||||||
{
|
{
|
||||||
execute = true;
|
execute = true;
|
||||||
fname = path.substr(0, fname.length() - 1);
|
fname = info.source.substr(0, fname.length() - 1);
|
||||||
|
|
||||||
if ( (mode != MODE_MANUAL) && (mode != MODE_STREAM) )
|
if ( (mode != MODE_MANUAL) )
|
||||||
{
|
{
|
||||||
Error(Fmt("Unsupported read mode %d for source %s in execution mode",
|
Error(Fmt("Unsupported read mode %d for source %s in execution mode",
|
||||||
mode, fname.c_str()));
|
mode, fname.c_str()));
|
||||||
|
@ -254,8 +261,14 @@ bool Raw::DoHeartbeat(double network_time, double current_time)
|
||||||
|
|
||||||
case MODE_REREAD:
|
case MODE_REREAD:
|
||||||
case MODE_STREAM:
|
case MODE_STREAM:
|
||||||
|
#ifdef DEBUG
|
||||||
|
Debug(DBG_INPUT, "Starting Heartbeat update");
|
||||||
|
#endif
|
||||||
Update(); // call update and not DoUpdate, because update
|
Update(); // call update and not DoUpdate, because update
|
||||||
// checks disabled.
|
// checks disabled.
|
||||||
|
#ifdef DEBUG
|
||||||
|
Debug(DBG_INPUT, "Finished with heartbeat update");
|
||||||
|
#endif
|
||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
assert(false);
|
assert(false);
|
||||||
|
|
|
@ -22,7 +22,7 @@ public:
|
||||||
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); }
|
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); }
|
||||||
|
|
||||||
protected:
|
protected:
|
||||||
virtual bool DoInit(string path, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
|
||||||
virtual void DoClose();
|
virtual void DoClose();
|
||||||
virtual bool DoUpdate();
|
virtual bool DoUpdate();
|
||||||
virtual bool DoHeartbeat(double network_time, double current_time);
|
virtual bool DoHeartbeat(double network_time, double current_time);
|
||||||
|
|
|
@ -86,3 +86,8 @@ module LogSQLite;
|
||||||
|
|
||||||
const set_separator: string;
|
const set_separator: string;
|
||||||
|
|
||||||
|
# Options for the None writer.
|
||||||
|
|
||||||
|
module LogNone;
|
||||||
|
|
||||||
|
const debug: bool;
|
||||||
|
|
|
@@ -60,6 +60,7 @@ struct Manager::Filter {
 string path;
 Val* path_val;
 EnumVal* writer;
+TableVal* config;
 bool local;
 bool remote;
 double interval;
@@ -83,6 +84,7 @@ struct Manager::WriterInfo {
 double interval;
 Func* postprocessor;
 WriterFrontend* writer;
+WriterBackend::WriterInfo info;
 };
 
 struct Manager::Stream {
@@ -527,6 +529,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
 Val* log_remote = fval->LookupWithDefault(rtype->FieldOffset("log_remote"));
 Val* interv = fval->LookupWithDefault(rtype->FieldOffset("interv"));
 Val* postprocessor = fval->LookupWithDefault(rtype->FieldOffset("postprocessor"));
+Val* config = fval->LookupWithDefault(rtype->FieldOffset("config"));
 
 Filter* filter = new Filter;
 filter->name = name->AsString()->CheckString();
@@ -538,6 +541,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
 filter->remote = log_remote->AsBool();
 filter->interval = interv->AsInterval();
 filter->postprocessor = postprocessor ? postprocessor->AsFunc() : 0;
+filter->config = config->Ref()->AsTableVal();
 
 Unref(name);
 Unref(pred);
@@ -546,6 +550,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
 Unref(log_remote);
 Unref(interv);
 Unref(postprocessor);
+Unref(config);
 
 // Build the list of fields that the filter wants included, including
 // potentially rolling out fields.
@@ -773,8 +778,27 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 for ( int j = 0; j < filter->num_fields; ++j )
 arg_fields[j] = new threading::Field(*filter->fields[j]);
 
+WriterBackend::WriterInfo info;
+info.path = path;
+
+HashKey* k;
+IterCookie* c = filter->config->AsTable()->InitForIteration();
+
+TableEntryVal* v;
+while ( (v = filter->config->AsTable()->NextEntry(k, c)) )
+{
+ListVal* index = filter->config->RecoverIndex(k);
+string key = index->Index(0)->AsString()->CheckString();
+string value = v->Value()->AsString()->CheckString();
+info.config.insert(std::make_pair(key, value));
+Unref(index);
+delete k;
+}
+
+// CreateWriter() will set the other fields in info.
+
 writer = CreateWriter(stream->id, filter->writer,
-path, filter->num_fields,
+info, filter->num_fields,
 arg_fields, filter->local, filter->remote);
 
 if ( ! writer )
@@ -782,7 +806,6 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 Unref(columns);
 return false;
 }
-
 }
 
 // Alright, can do the write now.
@@ -962,7 +985,7 @@ threading::Value** Manager::RecordToFilterVals(Stream* stream, Filter* filter,
 return vals;
 }
 
-WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
+WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
 int num_fields, const threading::Field* const* fields, bool local, bool remote)
 {
 Stream* stream = FindStream(id);
@@ -972,7 +995,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
 return false;
 
 Stream::WriterMap::iterator w =
-stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), path));
+stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), info.path));
 
 if ( w != stream->writers.end() )
 // If we already have a writer for this. That's fine, we just
@@ -982,8 +1005,6 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
 WriterFrontend* writer_obj = new WriterFrontend(id, writer, local, remote);
 assert(writer_obj);
 
-writer_obj->Init(path, num_fields, fields);
-
 WriterInfo* winfo = new WriterInfo;
 winfo->type = writer->Ref()->AsEnumVal();
 winfo->writer = writer_obj;
@@ -991,6 +1012,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
 winfo->rotation_timer = 0;
 winfo->interval = 0;
 winfo->postprocessor = 0;
+winfo->info = info;
 
 // Search for a corresponding filter for the writer/path pair and use its
 // rotation settings. If no matching filter is found, fall back on
@@ -1002,7 +1024,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
 {
 Filter* f = *it;
 if ( f->writer->AsEnum() == writer->AsEnum() &&
-f->path == winfo->writer->Path() )
+f->path == winfo->writer->info.path )
 {
 found_filter_match = true;
 winfo->interval = f->interval;
@@ -1021,9 +1043,19 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, string path,
 InstallRotationTimer(winfo);
 
 stream->writers.insert(
-Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), path),
+Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), info.path),
 winfo));
 
+// Still need to set the WriterInfo's rotation parameters, which we
+// computed above.
+const char* base_time = log_rotate_base_time ?
+log_rotate_base_time->AsString()->CheckString() : 0;
+
+winfo->info.rotation_interval = winfo->interval;
+winfo->info.rotation_base = parse_rotate_base_time(base_time);
+
+writer_obj->Init(winfo->info, num_fields, fields);
+
 return writer_obj;
 }
 
@@ -1102,7 +1134,7 @@ void Manager::SendAllWritersTo(RemoteSerializer::PeerID peer)
 EnumVal writer_val(i->first.first, BifType::Enum::Log::Writer);
 remote_serializer->SendLogCreateWriter(peer, (*s)->id,
 &writer_val,
-i->first.second,
+i->second->info,
 writer->NumFields(),
 writer->Fields());
 }
@@ -1227,8 +1259,9 @@ void Manager::InstallRotationTimer(WriterInfo* winfo)
 const char* base_time = log_rotate_base_time ?
 log_rotate_base_time->AsString()->CheckString() : 0;
 
+double base = parse_rotate_base_time(base_time);
 double delta_t =
-calc_next_rotate(rotation_interval, base_time);
+calc_next_rotate(network_time, rotation_interval, base);
 
 winfo->rotation_timer =
 new RotationTimer(network_time + delta_t, winfo, true);
@@ -1237,14 +1270,14 @@ void Manager::InstallRotationTimer(WriterInfo* winfo)
 timer_mgr->Add(winfo->rotation_timer);
 
 DBG_LOG(DBG_LOGGING, "Scheduled rotation timer for %s to %.6f",
-winfo->writer->Path().c_str(), winfo->rotation_timer->Time());
+winfo->writer->Name().c_str(), winfo->rotation_timer->Time());
 }
 }
 
 void Manager::Rotate(WriterInfo* winfo)
 {
 DBG_LOG(DBG_LOGGING, "Rotating %s at %.6f",
-winfo->writer->Path().c_str(), network_time);
+winfo->writer->Name().c_str(), network_time);
 
 // Build a temporary path for the writer to move the file to.
 struct tm tm;
@@ -1255,7 +1288,7 @@ void Manager::Rotate(WriterInfo* winfo)
 localtime_r(&teatime, &tm);
 strftime(buf, sizeof(buf), date_fmt, &tm);
 
-string tmp = string(fmt("%s-%s", winfo->writer->Path().c_str(), buf));
+string tmp = string(fmt("%s-%s", winfo->writer->Info().path.c_str(), buf));
 
 // Trigger the rotation.
 winfo->writer->Rotate(tmp, winfo->open_time, network_time, terminating);
@@ -1273,7 +1306,7 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
 return true;
 
 DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s",
-writer->Path().c_str(), network_time, new_name.c_str());
+writer->Name().c_str(), network_time, new_name.c_str());
 
 WriterInfo* winfo = FindWriter(writer);
 if ( ! winfo )
@@ -1283,7 +1316,7 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
 RecordVal* info = new RecordVal(BifType::Record::Log::RotationInfo);
 info->Assign(0, winfo->type->Ref());
 info->Assign(1, new StringVal(new_name.c_str()));
-info->Assign(2, new StringVal(winfo->writer->Path().c_str()));
+info->Assign(2, new StringVal(winfo->writer->Info().path.c_str()));
 info->Assign(3, new Val(open, TYPE_TIME));
 info->Assign(4, new Val(close, TYPE_TIME));
 info->Assign(5, new Val(terminating, TYPE_BOOL));
@@ -9,13 +9,14 @@
 #include "../EventHandler.h"
 #include "../RemoteSerializer.h"
 
+#include "WriterBackend.h"
+
 class SerializationFormat;
 class RemoteSerializer;
 class RotationTimer;
 
 namespace logging {
 
-class WriterBackend;
 class WriterFrontend;
 class RotationFinishedMessage;
 
@@ -162,7 +163,7 @@ protected:
 //// Function also used by the RemoteSerializer.
 
 // Takes ownership of fields.
-WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, string path,
+WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
 int num_fields, const threading::Field* const* fields,
 bool local, bool remote);
 
@@ -4,6 +4,7 @@
 #include "bro_inet_ntop.h"
 #include "threading/SerialTypes.h"
 
+#include "Manager.h"
 #include "WriterBackend.h"
 #include "WriterFrontend.h"
 
@@ -60,14 +61,61 @@ public:
 
 using namespace logging;
 
+bool WriterBackend::WriterInfo::Read(SerializationFormat* fmt)
+{
+int size;
+
+if ( ! (fmt->Read(&path, "path") &&
+fmt->Read(&rotation_base, "rotation_base") &&
+fmt->Read(&rotation_interval, "rotation_interval") &&
+fmt->Read(&size, "config_size")) )
+return false;
+
+config.clear();
+
+while ( size )
+{
+string value;
+string key;
+
+if ( ! (fmt->Read(&value, "config-value") && fmt->Read(&value, "config-key")) )
+return false;
+
+config.insert(std::make_pair(value, key));
+}
+
+return true;
+}
+
+
+bool WriterBackend::WriterInfo::Write(SerializationFormat* fmt) const
+{
+int size = config.size();
+
+if ( ! (fmt->Write(path, "path") &&
+fmt->Write(rotation_base, "rotation_base") &&
+fmt->Write(rotation_interval, "rotation_interval") &&
+fmt->Write(size, "config_size")) )
+return false;
+
+for ( config_map::const_iterator i = config.begin(); i != config.end(); ++i )
+{
+if ( ! (fmt->Write(i->first, "config-value") && fmt->Write(i->second, "config-key")) )
+return false;
+}
+
+return true;
+}
+
 WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread()
 {
-path = "<path not yet set>";
 num_fields = 0;
 fields = 0;
 buffering = true;
 frontend = arg_frontend;
 
+info.path = "<path not yet set>";
+
 SetName(frontend->Name());
 }
 
@@ -108,17 +156,17 @@ void WriterBackend::DisableFrontend()
 SendOut(new DisableMessage(frontend));
 }
 
-bool WriterBackend::Init(string arg_path, int arg_num_fields, const Field* const* arg_fields)
+bool WriterBackend::Init(const WriterInfo& arg_info, int arg_num_fields, const Field* const* arg_fields)
 {
-path = arg_path;
+info = arg_info;
 num_fields = arg_num_fields;
 fields = arg_fields;
 
-string name = Fmt("%s/%s", path.c_str(), frontend->Name().c_str());
+string name = Fmt("%s/%s", info.path.c_str(), frontend->Name().c_str());
 
 SetName(name);
 
-if ( ! DoInit(arg_path, arg_num_fields, arg_fields) )
+if ( ! DoInit(arg_info, arg_num_fields, arg_fields) )
 {
 DisableFrontend();
 return false;
@@ -5,12 +5,14 @@
 #ifndef LOGGING_WRITERBACKEND_H
 #define LOGGING_WRITERBACKEND_H
 
-#include "Manager.h"
-
 #include "threading/MsgThread.h"
 
+class RemoteSerializer;
+
 namespace logging {
 
+class WriterFrontend;
+
 /**
 * Base class for writer implementation. When the logging::Manager creates a
 * new logging filter, it instantiates a WriterFrontend. That then in turn
@@ -41,21 +43,57 @@ public:
 */
 virtual ~WriterBackend();
 
+/**
+* A struct passing information to the writer at initialization time.
+*/
+struct WriterInfo
+{
+typedef std::map<string, string> config_map;
+
+/**
+* A string left to the interpretation of the writer
+* implementation; it corresponds to the value configured on
+* the script-level for the logging filter.
+*/
+string path;
+
+/**
+* The rotation interval as configured for this writer.
+*/
+double rotation_interval;
+
+/**
+* The parsed value of log_rotate_base_time in seconds.
+*/
+double rotation_base;
+
+/**
+* A map of key/value pairs corresponding to the relevant
+* filter's "config" table.
+*/
+std::map<string, string> config;
+
+private:
+friend class ::RemoteSerializer;
+
+// Note, these need to be adapted when changing the struct's
+// fields. They serialize/deserialize the struct.
+bool Read(SerializationFormat* fmt);
+bool Write(SerializationFormat* fmt) const;
+};
+
 /**
 * One-time initialization of the writer to define the logged fields.
 *
-* @param path A string left to the interpretation of the writer
-* implementation; it corresponds to the value configured on the
-* script-level for the logging filter.
-*
-* @param num_fields The number of log fields for the stream.
+* @param info Meta information for the writer.
+* @param num_fields
 *
 * @param fields An array of size \a num_fields with the log fields.
 * The methods takes ownership of the array.
 *
 * @return False if an error occured.
 */
-bool Init(string path, int num_fields, const threading::Field* const* fields);
+bool Init(const WriterInfo& info, int num_fields, const threading::Field* const* fields);
 
 /**
 * Writes one log entry.
@@ -108,9 +146,9 @@ public:
 void DisableFrontend();
 
 /**
-* Returns the log path as passed into the constructor.
+* Returns the additional writer information into the constructor.
 */
-const string Path() const { return path; }
+const WriterInfo& Info() const { return info; }
 
 /**
 * Returns the number of log fields as passed into the constructor.
@@ -185,7 +223,7 @@ protected:
 * disabled and eventually deleted. When returning false, an
 * implementation should also call Error() to indicate what happened.
 */
-virtual bool DoInit(string path, int num_fields,
+virtual bool DoInit(const WriterInfo& info, int num_fields,
 const threading::Field* const* fields) = 0;
 
 /**
@@ -299,7 +337,7 @@ private:
 // this class, it's running in a different thread!
 WriterFrontend* frontend;
 
-string path; // Log path.
+WriterInfo info; // Meta information as passed to Init().
 int num_fields; // Number of log fields.
 const threading::Field* const* fields; // Log fields.
 bool buffering; // True if buffering is enabled.
@@ -2,6 +2,7 @@
 #include "Net.h"
 #include "threading/SerialTypes.h"
 
+#include "Manager.h"
 #include "WriterFrontend.h"
 #include "WriterBackend.h"
 
@@ -15,14 +16,14 @@ namespace logging {
 class InitMessage : public threading::InputMessage<WriterBackend>
 {
 public:
-InitMessage(WriterBackend* backend, const string path, const int num_fields, const Field* const* fields)
+InitMessage(WriterBackend* backend, const WriterBackend::WriterInfo& info, const int num_fields, const Field* const* fields)
 : threading::InputMessage<WriterBackend>("Init", backend),
-path(path), num_fields(num_fields), fields(fields) { }
+info(info), num_fields(num_fields), fields(fields) { }
 
-virtual bool Process() { return Object()->Init(path, num_fields, fields); }
+virtual bool Process() { return Object()->Init(info, num_fields, fields); }
 
 private:
-const string path;
+WriterBackend::WriterInfo info;
 const int num_fields;
 const Field * const* fields;
 };
@@ -134,10 +135,10 @@ WriterFrontend::~WriterFrontend()
 
 string WriterFrontend::Name() const
 {
-if ( path.size() )
+if ( info.path.size() )
 return ty_name;
 
-return ty_name + "/" + path;
+return ty_name + "/" + info.path;
 }
 
 void WriterFrontend::Stop()
@@ -149,7 +150,7 @@ void WriterFrontend::Stop()
 backend->Stop();
 }
 
-void WriterFrontend::Init(string arg_path, int arg_num_fields, const Field* const * arg_fields)
+void WriterFrontend::Init(const WriterBackend::WriterInfo& arg_info, int arg_num_fields, const Field* const * arg_fields)
 {
 if ( disabled )
 return;
@@ -157,19 +158,19 @@ void WriterFrontend::Init(string arg_path, int arg_num_fields, const Field* cons
 if ( initialized )
 reporter->InternalError("writer initialize twice");
 
-path = arg_path;
+info = arg_info;
 num_fields = arg_num_fields;
 fields = arg_fields;
 
 initialized = true;
 
 if ( backend )
-backend->SendIn(new InitMessage(backend, arg_path, arg_num_fields, arg_fields));
+backend->SendIn(new InitMessage(backend, arg_info, arg_num_fields, arg_fields));
 
 if ( remote )
 remote_serializer->SendLogCreateWriter(stream,
 writer,
-arg_path,
+arg_info,
 arg_num_fields,
 arg_fields);
 
@@ -183,7 +184,7 @@ void WriterFrontend::Write(int num_fields, Value** vals)
 if ( remote )
 remote_serializer->SendLogWrite(stream,
 writer,
-path,
+info.path,
 num_fields,
 vals);
 
@@ -3,13 +3,13 @@
 #ifndef LOGGING_WRITERFRONTEND_H
 #define LOGGING_WRITERFRONTEND_H
 
-#include "Manager.h"
+#include "WriterBackend.h"
 
 #include "threading/MsgThread.h"
 
 namespace logging {
 
-class WriterBackend;
+class Manager;
 
 /**
 * Bridge class between the logging::Manager and backend writer threads. The
@@ -68,7 +68,7 @@ public:
 *
 * This method must only be called from the main thread.
 */
-void Init(string path, int num_fields, const threading::Field* const* fields);
+void Init(const WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const* fields);
 
 /**
 * Write out a record.
@@ -169,9 +169,9 @@ public:
 bool Disabled() { return disabled; }
 
 /**
-* Returns the log path as passed into the constructor.
+* Returns the additional writer information as passed into the constructor.
 */
-const string Path() const { return path; }
+const WriterBackend::WriterInfo& Info() const { return info; }
 
 /**
 * Returns the number of log fields as passed into the constructor.
@@ -207,7 +207,7 @@ protected:
 bool remote; // True if loggin remotely.
 
 string ty_name; // Name of the backend type. Set by the manager.
-string path; // The log path.
+WriterBackend::WriterInfo info; // The writer information.
 int num_fields; // The number of log fields.
 const threading::Field* const* fields; // The log fields.
 
@@ -69,8 +69,10 @@ bool Ascii::WriteHeaderField(const string& key, const string& val)
 return (fwrite(str.c_str(), str.length(), 1, file) == 1);
 }
 
-bool Ascii::DoInit(string path, int num_fields, const Field* const * fields)
+bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields)
 {
+string path = info.path;
+
 if ( output_to_stdout )
 path = "/dev/stdout";
 
@@ -290,7 +292,7 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
 Value** vals)
 {
 if ( ! file )
-DoInit(Path(), NumFields(), Fields());
+DoInit(Info(), NumFields(), Fields());
 
 desc.Clear();
 
@@ -320,7 +322,7 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
 bool Ascii::DoRotate(string rotated_path, double open, double close, bool terminating)
 {
 // Don't rotate special files or if there's not one currently open.
-if ( ! file || IsSpecial(Path()) )
+if ( ! file || IsSpecial(Info().path) )
 return true;
 
 fclose(file);
@@ -19,7 +19,7 @@ public:
 static string LogExt();
 
 protected:
-virtual bool DoInit(string path, int num_fields,
+virtual bool DoInit(const WriterInfo& info, int num_fields,
 const threading::Field* const* fields);
 virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
 threading::Value** vals);
@@ -263,7 +263,7 @@ bool DataSeries::OpenLog(string path)
 return true;
 }
 
-bool DataSeries::DoInit(string path, int num_fields, const threading::Field* const * fields)
+bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading::Field* const * fields)
 {
 // We first construct an XML schema thing (and, if ds_dump_schema is
 // set, dump it to path + ".ds.xml"). Assuming that goes well, we
@@ -298,11 +298,11 @@ bool DataSeries::DoInit(string path, int num_fields, const threading::Field* con
 schema_list.push_back(val);
 }
 
-string schema = BuildDSSchemaFromFieldTypes(schema_list, path);
+string schema = BuildDSSchemaFromFieldTypes(schema_list, info.path);
 
 if( ds_dump_schema )
 {
-FILE* pFile = fopen ( string(path + ".ds.xml").c_str() , "wb" );
+FILE* pFile = fopen ( string(info.path + ".ds.xml").c_str() , "wb" );
 
 if( pFile )
 {
@@ -340,7 +340,7 @@ bool DataSeries::DoInit(string path, int num_fields, const threading::Field* con
 log_type = log_types.registerTypePtr(schema);
 log_series.setType(log_type);
 
-return OpenLog(path);
+return OpenLog(info.path);
 }
 
 bool DataSeries::DoFlush()
@@ -401,7 +401,7 @@ bool DataSeries::DoRotate(string rotated_path, double open, double close, bool t
 // size will be (much) larger.
 CloseLog();
 
-string dsname = Path() + ".ds";
+string dsname = Info().path + ".ds";
 string nname = rotated_path + ".ds";
 rename(dsname.c_str(), nname.c_str());
 
@@ -411,7 +411,7 @@ bool DataSeries::DoRotate(string rotated_path, double open, double close, bool t
 return false;
 }
 
-return OpenLog(Path());
+return OpenLog(Info().path);
 }
 
 bool DataSeries::DoSetBuf(bool enabled)
@@ -26,7 +26,7 @@ public:
 protected:
 	// Overidden from WriterBackend.

-	virtual bool DoInit(string path, int num_fields,
+	virtual bool DoInit(const WriterInfo& info, int num_fields,
 			    const threading::Field* const * fields);

 	virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
@@ -1,14 +1,41 @@

 #include "None.h"
+#include "NetVar.h"

 using namespace logging;
 using namespace writer;

+bool None::DoInit(const WriterInfo& info, int num_fields,
+		  const threading::Field* const * fields)
+	{
+	if ( BifConst::LogNone::debug )
+		{
+		std::cout << "[logging::writer::None]" << std::endl;
+		std::cout << " path=" << info.path << std::endl;
+		std::cout << " rotation_interval=" << info.rotation_interval << std::endl;
+		std::cout << " rotation_base=" << info.rotation_base << std::endl;
+
+		for ( std::map<string,string>::const_iterator i = info.config.begin(); i != info.config.end(); i++ )
+			std::cout << " config[" << i->first << "] = " << i->second << std::endl;
+
+		for ( int i = 0; i < num_fields; i++ )
+			{
+			const threading::Field* field = fields[i];
+			std::cout << " field " << field->name << ": "
+				  << type_name(field->type) << std::endl;
+			}
+
+		std::cout << std::endl;
+		}
+
+	return true;
+	}
+
 bool None::DoRotate(string rotated_path, double open, double close, bool terminating)
 	{
-	if ( ! FinishedRotation(string("/dev/null"), Path(), open, close, terminating))
+	if ( ! FinishedRotation(string("/dev/null"), Info().path, open, close, terminating))
 		{
-		Error(Fmt("error rotating %s", Path().c_str()));
+		Error(Fmt("error rotating %s", Info().path.c_str()));
 		return false;
 		}

@@ -18,8 +18,8 @@ public:
 		{ return new None(frontend); }

 protected:
-	virtual bool DoInit(string path, int num_fields,
-			    const threading::Field* const * fields) { return true; }
+	virtual bool DoInit(const WriterInfo& info, int num_fields,
+			    const threading::Field* const * fields);

 	virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
 			     threading::Value** vals) { return true; }
@@ -27,7 +27,7 @@ protected:
 	virtual bool DoRotate(string rotated_path, double open,
 			      double close, bool terminating);
 	virtual bool DoFlush() { return true; }
-	virtual bool DoFinish() { return true; }
+	virtual bool DoFinish() { WriterBackend::DoFinish(); return true; }
 	};

 }
@@ -108,11 +108,21 @@ bool SQLite::checkError( int code )
 		return false;
 	}

-bool SQLite::DoInit(string path, int num_fields,
+bool SQLite::DoInit(const WriterInfo& info, int num_fields,
 		    const Field* const * fields)
 	{

-	string fullpath = path+ ".sqlite";
+	string fullpath = info.path+ ".sqlite";
+	string dbname;
+
+	map<string, string>::const_iterator it = info.config.find("dbname");
+	if ( it == info.config.end() ) {
+		MsgThread::Info(Fmt("dbname configuration option not found. Defaulting to path %s", info.path.c_str()));
+		dbname = info.path;
+	} else {
+		dbname = it->second;
+	}
+

 	if ( checkError(sqlite3_open_v2(
 		fullpath.c_str(),
@@ -124,7 +134,7 @@ bool SQLite::DoInit(string path, int num_fields,
 		NULL)) )
 		return false;

-	string create = "CREATE TABLE IF NOT EXISTS "+path+" (\n"; // yes. using path here is stupid. open for better ideas.
+	string create = "CREATE TABLE IF NOT EXISTS "+dbname+" (\n"; // yes. using path here is stupid. open for better ideas.
 		//"id SERIAL UNIQUE NOT NULL"; // SQLite has rowids, we do not need a counter here.

 	for ( int i = 0; i < num_fields; ++i )
@@ -168,7 +178,7 @@ bool SQLite::DoInit(string path, int num_fields,
 	// create the prepared statement that will be re-used forever...

 	string insert = "VALUES (";
-	string names = "INSERT INTO "+path+" ( ";
+	string names = "INSERT INTO "+dbname+" ( ";

 	for ( int i = 0; i < num_fields; i++ )
 		{
@@ -23,7 +23,7 @@ public:
 		{ return new SQLite(frontend); }

 protected:
-	virtual bool DoInit(string path, int num_fields,
+	virtual bool DoInit(const WriterInfo& info, int num_fields,
 			    const threading::Field* const* fields);
 	virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
 			     threading::Value** vals);

170
src/socks-analyzer.pac
Normal file
@@ -0,0 +1,170 @@
+
+%header{
+StringVal* array_to_string(vector<uint8> *a);
+%}
+
+%code{
+StringVal* array_to_string(vector<uint8> *a)
+	{
+	int len = a->size();
+	char tmp[len];
+	char *s = tmp;
+	for ( vector<uint8>::iterator i = a->begin(); i != a->end(); *s++ = *i++ );
+
+	while ( len > 0 && tmp[len-1] == '\0' )
+		--len;
+
+	return new StringVal(len, tmp);
+	}
+%}
+
+refine connection SOCKS_Conn += {
+
+	function socks4_request(request: SOCKS4_Request): bool
+		%{
+		RecordVal* sa = new RecordVal(socks_address);
+		sa->Assign(0, new AddrVal(htonl(${request.addr})));
+		if ( ${request.v4a} )
+			sa->Assign(1, array_to_string(${request.name}));
+
+		BifEvent::generate_socks_request(bro_analyzer(),
+		                                 bro_analyzer()->Conn(),
+		                                 4,
+		                                 ${request.command},
+		                                 sa,
+		                                 new PortVal(${request.port} | TCP_PORT_MASK),
+		                                 array_to_string(${request.user}));
+
+		static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(true);
+
+		return true;
+		%}
+
+	function socks4_reply(reply: SOCKS4_Reply): bool
+		%{
+		RecordVal* sa = new RecordVal(socks_address);
+		sa->Assign(0, new AddrVal(htonl(${reply.addr})));
+
+		BifEvent::generate_socks_reply(bro_analyzer(),
+		                               bro_analyzer()->Conn(),
+		                               4,
+		                               ${reply.status},
+		                               sa,
+		                               new PortVal(${reply.port} | TCP_PORT_MASK));
+
+		bro_analyzer()->ProtocolConfirmation();
+		static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(false);
+		return true;
+		%}
+
+	function socks5_request(request: SOCKS5_Request): bool
+		%{
+		if ( ${request.reserved} != 0 )
+			{
+			bro_analyzer()->ProtocolViolation(fmt("invalid value in reserved field: %d", ${request.reserved}));
+			return false;
+			}
+
+		RecordVal* sa = new RecordVal(socks_address);
+
+		// This is dumb and there must be a better way (checking for presence of a field)...
+		switch ( ${request.remote_name.addr_type} )
+			{
+			case 1:
+				sa->Assign(0, new AddrVal(htonl(${request.remote_name.ipv4})));
+				break;
+
+			case 3:
+				sa->Assign(1, new StringVal(${request.remote_name.domain_name.name}.length(),
+				                            (const char*) ${request.remote_name.domain_name.name}.data()));
+				break;
+
+			case 4:
+				sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${request.remote_name.ipv6}, IPAddr::Network)));
+				break;
+
+			default:
+				bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${request.remote_name.addr_type}));
+				return false;
+				break;
+			}
+
+		BifEvent::generate_socks_request(bro_analyzer(),
+		                                 bro_analyzer()->Conn(),
+		                                 5,
+		                                 ${request.command},
+		                                 sa,
+		                                 new PortVal(${request.port} | TCP_PORT_MASK),
+		                                 new StringVal(""));
+
+		static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(true);
+
+		return true;
+		%}
+
+	function socks5_reply(reply: SOCKS5_Reply): bool
+		%{
+		RecordVal* sa = new RecordVal(socks_address);
+
+		// This is dumb and there must be a better way (checking for presence of a field)...
+		switch ( ${reply.bound.addr_type} )
+			{
+			case 1:
+				sa->Assign(0, new AddrVal(htonl(${reply.bound.ipv4})));
+				break;
+
+			case 3:
+				sa->Assign(1, new StringVal(${reply.bound.domain_name.name}.length(),
+				                            (const char*) ${reply.bound.domain_name.name}.data()));
+				break;
+
+			case 4:
+				sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${reply.bound.ipv6}, IPAddr::Network)));
+				break;
+
+			default:
+				bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${reply.bound.addr_type}));
+				return false;
+				break;
+			}
+
+		BifEvent::generate_socks_reply(bro_analyzer(),
+		                               bro_analyzer()->Conn(),
+		                               5,
+		                               ${reply.reply},
+		                               sa,
+		                               new PortVal(${reply.port} | TCP_PORT_MASK));
+
+		bro_analyzer()->ProtocolConfirmation();
+		static_cast<SOCKS_Analyzer*>(bro_analyzer())->EndpointDone(false);
+		return true;
+		%}
+
+	function version_error(version: uint8): bool
+		%{
+		bro_analyzer()->ProtocolViolation(fmt("unsupported/unknown SOCKS version %d", version));
+		return true;
+		%}
+
+
+};
+
+refine typeattr SOCKS_Version_Error += &let {
+	proc: bool = $context.connection.version_error(version);
+};
+
+refine typeattr SOCKS4_Request += &let {
+	proc: bool = $context.connection.socks4_request(this);
+};
+
+refine typeattr SOCKS4_Reply += &let {
+	proc: bool = $context.connection.socks4_reply(this);
+};
+
+refine typeattr SOCKS5_Request += &let {
+	proc: bool = $context.connection.socks5_request(this);
+};
+
+refine typeattr SOCKS5_Reply += &let {
+	proc: bool = $context.connection.socks5_reply(this);
+};
119
src/socks-protocol.pac
Normal file
@@ -0,0 +1,119 @@
+
+type SOCKS_Version(is_orig: bool) = record {
+	version: uint8;
+	msg: case version of {
+		4       -> socks4_msg:     SOCKS4_Message(is_orig);
+		5       -> socks5_msg:     SOCKS5_Message(is_orig);
+		default -> socks_msg_fail: SOCKS_Version_Error(version);
+	};
+};
+
+type SOCKS_Version_Error(version: uint8) = record {
+	nothing: empty;
+};
+
+# SOCKS5 Implementation
+type SOCKS5_Message(is_orig: bool) = case $context.connection.v5_past_authentication() of {
+	true  -> msg:  SOCKS5_Real_Message(is_orig);
+	false -> auth: SOCKS5_Auth_Negotiation(is_orig);
+};
+
+type SOCKS5_Auth_Negotiation(is_orig: bool) = case is_orig of {
+	true  -> req: SOCKS5_Auth_Negotiation_Request;
+	false -> rep: SOCKS5_Auth_Negotiation_Reply;
+};
+
+type SOCKS5_Auth_Negotiation_Request = record {
+	method_count: uint8;
+	methods:      uint8[method_count];
+};
+
+type SOCKS5_Auth_Negotiation_Reply = record {
+	selected_auth_method: uint8;
+} &let {
+	past_auth = $context.connection.set_v5_past_authentication();
+};
+
+type SOCKS5_Real_Message(is_orig: bool) = case is_orig of {
+	true  -> request: SOCKS5_Request;
+	false -> reply:   SOCKS5_Reply;
+};
+
+type Domain_Name = record {
+	len:  uint8;
+	name: bytestring &length=len;
+} &byteorder = bigendian;
+
+type SOCKS5_Address = record {
+	addr_type: uint8;
+	addr: case addr_type of {
+		1       -> ipv4:        uint32;
+		3       -> domain_name: Domain_Name;
+		4       -> ipv6:        uint32[4];
+		default -> err:         bytestring &restofdata &transient;
+	};
+} &byteorder = bigendian;
+
+type SOCKS5_Request = record {
+	command:     uint8;
+	reserved:    uint8;
+	remote_name: SOCKS5_Address;
+	port:        uint16;
+} &byteorder = bigendian;
+
+type SOCKS5_Reply = record {
+	reply:    uint8;
+	reserved: uint8;
+	bound:    SOCKS5_Address;
+	port:     uint16;
+} &byteorder = bigendian;
+
+
+# SOCKS4 Implementation
+type SOCKS4_Message(is_orig: bool) = case is_orig of {
+	true  -> request: SOCKS4_Request;
+	false -> reply:   SOCKS4_Reply;
+};
+
+type SOCKS4_Request = record {
+	command: uint8;
+	port:    uint16;
+	addr:    uint32;
+	user:    uint8[] &until($element == 0);
+	host: case v4a of {
+		true  -> name:  uint8[] &until($element == 0); # v4a
+		false -> empty: uint8[] &length=0;
+	} &requires(v4a);
+} &byteorder = bigendian &let {
+	v4a: bool = (addr <= 0x000000ff);
+};
+
+type SOCKS4_Reply = record {
+	zero:   uint8;
+	status: uint8;
+	port:   uint16;
+	addr:   uint32;
+} &byteorder = bigendian;
+
+
+refine connection SOCKS_Conn += {
+	%member{
+		bool v5_authenticated_;
+	%}
+
+	%init{
+		v5_authenticated_ = false;
+	%}
+
+	function v5_past_authentication(): bool
+		%{
+		return v5_authenticated_;
+		%}
+
+	function set_v5_past_authentication(): bool
+		%{
+		v5_authenticated_ = true;
+		return true;
+		%}
+};
24
src/socks.pac
Normal file
@@ -0,0 +1,24 @@
+%include binpac.pac
+%include bro.pac
+
+%extern{
+	#include "SOCKS.h"
+%}
+
+analyzer SOCKS withcontext {
+	connection: SOCKS_Conn;
+	flow:       SOCKS_Flow;
+};
+
+connection SOCKS_Conn(bro_analyzer: BroAnalyzer) {
+	upflow   = SOCKS_Flow(true);
+	downflow = SOCKS_Flow(false);
+};
+
+%include socks-protocol.pac
+
+flow SOCKS_Flow(is_orig: bool) {
+	datagram = SOCKS_Version(is_orig) withcontext(connection, this);
+};
+
+%include socks-analyzer.pac
@@ -170,6 +170,17 @@ enum ID %{
 	Unknown,
 %}

+module Tunnel;
+enum Type %{
+	NONE,
+	IP,
+	AYIYA,
+	TEREDO,
+	SOCKS,
+%}
+
+type EncapsulatingConn: record;
+
 module Input;

 enum Reader %{
25
src/util.cc
@@ -1082,18 +1082,8 @@ const char* log_file_name(const char* tag)
 	return fmt("%s.%s", tag, (env ? env : "log"));
 	}

-double calc_next_rotate(double interval, const char* rotate_base_time)
+double parse_rotate_base_time(const char* rotate_base_time)
 	{
-	double current = network_time;
-
-	// Calculate start of day.
-	time_t teatime = time_t(current);
-
-	struct tm t;
-	t = *localtime(&teatime);
-	t.tm_hour = t.tm_min = t.tm_sec = 0;
-	double startofday = mktime(&t);
-
 	double base = -1;

 	if ( rotate_base_time && rotate_base_time[0] != '\0' )
@@ -1105,6 +1095,19 @@ double calc_next_rotate(double interval, const char* rotate_base_time)
 		base = t.tm_min * 60 + t.tm_hour * 60 * 60;
 	}

+	return base;
+	}
+
+double calc_next_rotate(double current, double interval, double base)
+	{
+	// Calculate start of day.
+	time_t teatime = time_t(current);
+
+	struct tm t;
+	t = *localtime_r(&teatime, &t);
+	t.tm_hour = t.tm_min = t.tm_sec = 0;
+	double startofday = mktime(&t);
+
 	if ( base < 0 )
 		// No base time given. To get nice timestamps, we round
 		// the time up to the next multiple of the rotation interval.
17
src/util.h
@@ -193,9 +193,22 @@ extern FILE* rotate_file(const char* name, RecordVal* rotate_info);
 // This mimics the script-level function with the same name.
 const char* log_file_name(const char* tag);

+// Parse a time string of the form "HH:MM" (as used for the rotation base
+// time) into a double representing the number of seconds. Returns -1 if the
+// string cannot be parsed. The function's result is intended to be used with
+// calc_next_rotate().
+//
+// This function is not thread-safe.
+double parse_rotate_base_time(const char* rotate_base_time);
+
 // Calculate the duration until the next time a file is to be rotated, based
-// on the given rotate_interval and rotate_base_time.
-double calc_next_rotate(double rotate_interval, const char* rotate_base_time);
+// on the given rotate_interval and rotate_base_time. 'current' the the
+// current time to be used as base, 'rotate_interval' the rotation interval,
+// and 'base' the value returned by parse_rotate_base_time(). For the latter,
+// if the function returned -1, that's fine, calc_next_rotate() handles that.
+//
+// This function is thread-safe.
+double calc_next_rotate(double current, double rotate_interval, double base);

 // Terminates processing gracefully, similar to pressing CTRL-C.
 void terminate_processing();
18
testing/btest/Baseline/core.expr-exception/output
Normal file
@@ -0,0 +1,18 @@
+ftp field missing
+[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp]
+ftp field missing
+[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp]
@@ -5,12 +5,12 @@
 #path	reporter
 #fields	ts	level	message	location
 #types	time	enum	string	string
-1300475168.783842	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.915940	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.916118	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.918295	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.952193	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.952228	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.954761	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475168.962628	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
-1300475169.780331	Reporter::ERROR	field value missing [c$ftp]	/Users/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8
+1300475168.783842	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.915940	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.916118	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.918295	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.952193	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.952228	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.954761	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475168.962628	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
+1300475169.780331	Reporter::ERROR	field value missing [c$ftp]	/home/jsiwek/bro/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
15
testing/btest/Baseline/core.leaks.ayiya/conn.log
Normal file
@@ -0,0 +1,15 @@
+#separator \x09
+#set_separator	,
+#empty_field	(empty)
+#unset_field	-
+#path	conn
+#fields	ts	uid	id.orig_h	id.orig_p	id.resp_h	id.resp_p	proto	service	duration	orig_bytes	resp_bytes	conn_state	local_orig	missed_bytes	history	orig_pkts	orig_ip_bytes	resp_pkts	resp_ip_bytes	parents
+#types	time	string	addr	port	addr	port	enum	string	interval	count	count	string	bool	count	string	count	count	count	count	table[string]
+1257655301.595604	5OKnoww6xl4	2001:4978:f:4c::2	53382	2001:4860:b002::68	80	tcp	http	2.101052	2981	4665	S1	-	0	ShADad	10	3605	11	5329	k6kgXLOoSKl
+1257655296.585034	k6kgXLOoSKl	192.168.3.101	53859	216.14.98.22	5072	udp	ayiya	20.879001	5129	6109	SF	-	0	Dd	21	5717	13	6473	(empty)
+1257655293.629048	UWkUyAuUGXf	192.168.3.101	53796	216.14.98.22	5072	udp	ayiya	-	-	-	SHR	-	0	d	0	0	1	176	(empty)
+1257655296.585333	FrJExwHcSal	::	135	ff02::1:ff00:2	136	icmp	-	-	-	-	OTH	-	0	-	1	64	0	0	k6kgXLOoSKl
+1257655293.629048	arKYeMETxOg	2001:4978:f:4c::1	128	2001:4978:f:4c::2	129	icmp	-	23.834987	168	56	OTH	-	0	-	3	312	1	104	UWkUyAuUGXf,k6kgXLOoSKl
+1257655296.585188	TEfuqmmG4bh	fe80::216:cbff:fe9a:4cb9	131	ff02::1:ff00:2	130	icmp	-	0.919988	32	0	OTH	-	0	-	2	144	0	0	k6kgXLOoSKl
+1257655296.585151	j4u32Pc5bif	fe80::216:cbff:fe9a:4cb9	131	ff02::2:f901:d225	130	icmp	-	0.719947	32	0	OTH	-	0	-	2	144	0	0	k6kgXLOoSKl
+1257655296.585034	nQcgTWjvg4c	fe80::216:cbff:fe9a:4cb9	131	ff02::1:ff9a:4cb9	130	icmp	-	4.922880	32	0	OTH	-	0	-	2	144	0	0	k6kgXLOoSKl
10
testing/btest/Baseline/core.leaks.ayiya/http.log
Normal file
@@ -0,0 +1,10 @@
+#separator \x09
+#set_separator	,
+#empty_field	(empty)
+#unset_field	-
+#path	http
+#fields	ts	uid	id.orig_h	id.orig_p	id.resp_h	id.resp_p	trans_depth	method	host	uri	referrer	user_agent	request_body_len	response_body_len	status_code	status_msg	info_code	info_msg	filename	tags	username	password	proxied	mime_type	md5	extraction_file
+#types	time	string	addr	port	addr	port	count	string	string	string	string	string	count	count	count	string	count	string	string	table[enum]	string	string	table[string]	string	string	file
+1257655301.652206	5OKnoww6xl4	2001:4978:f:4c::2	53382	2001:4860:b002::68	80	1	GET	ipv6.google.com	/	-	Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre)	0	10102	200	OK	-	-	-	(empty)	-	-	-	text/html	-	-
+1257655302.514424	5OKnoww6xl4	2001:4978:f:4c::2	53382	2001:4860:b002::68	80	2	GET	ipv6.google.com	/csi?v=3&s=webhp&action=&tran=undefined&e=17259,19771,21517,21766,21887,22212&ei=BUz2Su7PMJTglQfz3NzCAw&rt=prt.77,xjs.565,ol.645	http://ipv6.google.com/	Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre)	0	0	204	No Content	-	-	-	(empty)	-	-	-	-	-	-
+1257655303.603569	5OKnoww6xl4	2001:4978:f:4c::2	53382	2001:4860:b002::68	80	3	GET	ipv6.google.com	/gen_204?atyp=i&ct=fade&cad=1254&ei=BUz2Su7PMJTglQfz3NzCAw&zx=1257655303600	http://ipv6.google.com/	Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre)	0	0	204	No Content	-	-	-	(empty)	-	-	-	-	-	-
11
testing/btest/Baseline/core.leaks.ayiya/tunnel.log
Normal file
@@ -0,0 +1,11 @@
+#separator \x09
+#set_separator	,
+#empty_field	(empty)
+#unset_field	-
+#path	tunnel
+#fields	ts	uid	id.orig_h	id.orig_p	id.resp_h	id.resp_p	action	tunnel_type
+#types	time	string	addr	port	addr	port	enum	enum
+1257655293.629048	UWkUyAuUGXf	192.168.3.101	53796	216.14.98.22	5072	Tunnel::DISCOVER	Tunnel::AYIYA
+1257655296.585034	k6kgXLOoSKl	192.168.3.101	53859	216.14.98.22	5072	Tunnel::DISCOVER	Tunnel::AYIYA
+1257655317.464035	k6kgXLOoSKl	192.168.3.101	53859	216.14.98.22	5072	Tunnel::CLOSE	Tunnel::AYIYA
+1257655317.464035	UWkUyAuUGXf	192.168.3.101	53796	216.14.98.22	5072	Tunnel::CLOSE	Tunnel::AYIYA
13
testing/btest/Baseline/core.leaks.ip-in-ip/output
Normal file
@@ -0,0 +1,13 @@
+new_connection: tunnel
+  conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp]
+  encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]]
+new_connection: tunnel
+  conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp]
+  encap: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf], [cid=[orig_h=babe::beef, orig_p=0/unknown, resp_h=dead::babe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=arKYeMETxOg]]
+new_connection: tunnel
+  conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp]
+  encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]]
+tunnel_changed:
+  conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp]
+  old: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]]
+  new: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=k6kgXLOoSKl]]
28
testing/btest/Baseline/core.leaks.teredo/conn.log
Normal file
@ -0,0 +1,28 @@
|
||||||
|
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1210953047.736921 arKYeMETxOg 192.168.2.16 1576 75.126.130.163 80 tcp - 0.000357 0 0 SHR - 0 fA 1 40 1 40 (empty)
1210953050.867067 k6kgXLOoSKl 192.168.2.16 1577 75.126.203.78 80 tcp - 0.000387 0 0 SHR - 0 fA 1 40 1 40 (empty)
1210953057.833364 5OKnoww6xl4 192.168.2.16 1577 75.126.203.78 80 tcp - 0.079208 0 0 SH - 0 Fa 1 40 1 40 (empty)
1210953058.007081 VW0XPVINV8a 192.168.2.16 1576 75.126.130.163 80 tcp - - - - RSTOS0 - 0 R 1 40 0 0 (empty)
1210953057.834454 3PKsZ2Uye21 192.168.2.16 1578 75.126.203.78 80 tcp http 0.407908 790 171 RSTO - 0 ShADadR 6 1038 4 335 (empty)
1210953058.350065 fRFu0wcOle6 192.168.2.16 1920 192.168.2.1 53 udp dns 0.223055 66 438 SF - 0 Dd 2 122 2 494 (empty)
1210953058.577231 qSsw6ESzHV4 192.168.2.16 137 192.168.2.255 137 udp dns 1.499261 150 0 S0 - 0 D 3 234 0 0 (empty)
1210953074.264819 Tw8jXtpTGu6 192.168.2.16 1920 192.168.2.1 53 udp dns 0.297723 123 598 SF - 0 Dd 3 207 3 682 (empty)
1210953061.312379 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 tcp http 12.810848 1675 10467 S1 - 0 ShADad 10 2279 12 11191 GSxOnSLghOa
1210953076.058333 EAr0uf4mhq 192.168.2.16 1578 75.126.203.78 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty)
1210953074.055744 h5DsfNtYzi1 192.168.2.16 1577 75.126.203.78 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty)
1210953074.057124 P654jzLoe3a 192.168.2.16 1576 75.126.130.163 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty)
1210953074.570439 c4Zw9TmAE05 192.168.2.16 1580 67.228.110.120 80 tcp http 0.466677 469 3916 SF - 0 ShADadFf 7 757 6 4164 (empty)
1210953052.202579 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 udp teredo 8.928880 129 48 SF - 0 Dd 2 185 1 76 (empty)
1210953060.829233 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 udp teredo 13.293994 2359 11243 SF - 0 Dd 12 2695 13 11607 (empty)
1210953058.933954 iE6yhOq3SF 0.0.0.0 68 255.255.255.255 67 udp - - - - S0 - 0 D 1 328 0 0 (empty)
1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 udp teredo - - - SHR - 0 d 0 0 1 137 (empty)
1210953046.591933 UWkUyAuUGXf 192.168.2.16 138 192.168.2.255 138 udp - 28.448321 416 0 S0 - 0 D 2 472 0 0 (empty)
1210953052.324629 FrJExwHcSal fe80::8000:f227:bec8:61af 134 fe80::8000:ffff:ffff:fffd 133 icmp - - - - OTH - 0 - 1 88 0 0 TEfuqmmG4bh
1210953060.829303 qCaWGmzFtM5 2001:0:4137:9e50:8000:f12a:b9c8:2815 128 2001:4860:0:2001::68 129 icmp - 0.463615 4 4 OTH - 0 - 1 52 1 52 GSxOnSLghOa,nQcgTWjvg4c
1210953052.202579 j4u32Pc5bif fe80::8000:ffff:ffff:fffd 133 ff02::2 134 icmp - - - - OTH - 0 - 1 64 0 0 nQcgTWjvg4c
11
testing/btest/Baseline/core.leaks.teredo/http.log
Normal file
@@ -0,0 +1,11 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path http
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file
#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file
1210953057.917183 3PKsZ2Uye21 192.168.2.16 1578 75.126.203.78 80 1 POST download913.avast.com /cgi-bin/iavs4stats.cgi - Syncer/4.80 (av_pro-1169;f) 589 0 204 <empty> - - - (empty) - - - text/plain - -
1210953061.585996 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 1 GET ipv6.google.com / - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 6640 200 OK - - - (empty) - - - text/html - -
1210953073.381474 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 2 GET ipv6.google.com /search?hl=en&q=Wireshark+!&btnG=Google+Search http://ipv6.google.com/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 25119 200 OK - - - (empty) - - - text/html - -
1210953074.674817 c4Zw9TmAE05 192.168.2.16 1580 67.228.110.120 80 1 GET www.wireshark.org / http://ipv6.google.com/search?hl=en&q=Wireshark+%21&btnG=Google+Search Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 11845 200 OK - - - (empty) - - - text/xml - -
83
testing/btest/Baseline/core.leaks.teredo/output
Normal file
@@ -0,0 +1,83 @@
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]]
auth: [id=, value=, nonce=14796129349558001544, confirm=0]
auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]]
auth: [id=, value=, nonce=14796129349558001544, confirm=0]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp]
ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]]
auth: [id=, value=, nonce=14796129349558001544, confirm=0]
origin: [p=3797/udp, a=70.55.215.234]
auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp]
ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]]
auth: [id=, value=, nonce=14796129349558001544, confirm=0]
origin: [p=3797/udp, a=70.55.215.234]
origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp]
ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]]
auth: [id=, value=, nonce=14796129349558001544, confirm=0]
origin: [p=3797/udp, a=70.55.215.234]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=12, nxt=58, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
origin: [p=32900/udp, a=83.170.1.38]
origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
origin: [p=32900/udp, a=83.170.1.38]
bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
origin: [p=32900/udp, a=83.170.1.38]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]]
bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=12, nxt=58, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=24, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=24, nxt=6, hlim=245, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=817, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=514, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=898, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=812, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=717, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]]
packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp]
ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]]
13
testing/btest/Baseline/core.leaks.teredo/tunnel.log
Normal file
@@ -0,0 +1,13 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path tunnel
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p action tunnel_type
#types time string addr port addr port enum enum
1210953052.202579 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 Tunnel::DISCOVER Tunnel::TEREDO
1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 Tunnel::DISCOVER Tunnel::TEREDO
1210953061.292918 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 Tunnel::DISCOVER Tunnel::TEREDO
1210953076.058333 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 Tunnel::CLOSE Tunnel::TEREDO
1210953076.058333 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 Tunnel::CLOSE Tunnel::TEREDO
1210953076.058333 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 Tunnel::CLOSE Tunnel::TEREDO
@@ -3,6 +3,6 @@
 #empty_field (empty)
 #unset_field -
 #path conn
-#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
-#types time string addr port addr port enum string interval count count string bool count string count count count count
+#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
-1128727435.450898 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945
+1128727435.450898 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 (empty)
@@ -5,7 +5,7 @@
 #path packet_filter
 #fields ts node filter init success
 #types time string string bool bool
-1328294052.330721 - ip or not ip T T
+1340229717.179155 - ip or not ip T T
 #separator \x09
 #set_separator ,
 #empty_field (empty)
@@ -13,7 +13,7 @@
 #path packet_filter
 #fields ts node filter init success
 #types time string string bool bool
-1328294052.542418 - ((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
+1340229717.462355 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
 #separator \x09
 #set_separator ,
 #empty_field (empty)
@@ -21,7 +21,7 @@
 #path packet_filter
 #fields ts node filter init success
 #types time string string bool bool
-1328294052.748480 - port 42 T T
+1340229717.733007 - port 42 T T
 #separator \x09
 #set_separator ,
 #empty_field (empty)
@@ -29,4 +29,4 @@
 #path packet_filter
 #fields ts node filter init success
 #types time string string bool bool
-1328294052.952845 - port 56730 T T
+1340229718.001009 - port 56730 T T
Some files were not shown because too many files have changed in this diff