Merge remote-tracking branch 'origin/master' into topic/johanna/netcontrol-improvements

This commit is contained in:
Johanna Amann 2016-05-19 16:17:07 -07:00
commit 52d694f3bd
175 changed files with 1745 additions and 696 deletions

CHANGES
View file

@ -1,4 +1,81 @@
2.4-569 | 2016-05-18 07:39:35 -0700
* DTLS: Use magic constant from RFC 5389 for STUN detection.
(Johanna Amann)
* DTLS: Fix binpac bug with DTLSv1.2 client hellos. (Johanna Amann)
* DTLS: Fix interaction with STUN. Now the DTLS analyzer cleanly
skips all STUN messages. (Johanna Amann)
* Fix the way that child analyzers are added. (Johanna Amann)
2.4-563 | 2016-05-17 16:25:21 -0700
* Fix duplication of new_connection_contents event. Addresses
BIT-1602. (Johanna Amann)
* SMTP: Support SSL upgrade via X-ANONYMOUSTLS. This seems to be a
non-standardized Microsoft extension that, besides having a
different name, works pretty much the same as StartTLS. We just
treat it as such. (Johanna Amann)
* Fixing control framework's net_stats and peer_status commands. For
the latter, this removes most of the values returned, as we don't
have access to them anymore. (Robin Sommer)
2.4-555 | 2016-05-16 20:10:15 -0700
* Fix failing plugin tests on OS X 10.11. (Daniel Thayer)
* Fix failing test on Debian/FreeBSD. (Johanna Amann)
2.4-552 | 2016-05-12 08:04:33 -0700
* Fix a bug in receiving remote logs via broker. (Daniel Thayer)
* Fix Bro and unit tests when broker is not enabled. (Daniel Thayer)
* Added interpreter error for local event variables. (Jan Grashoefer)
2.4-544 | 2016-05-07 12:19:07 -0700
* Switching all use of gmtime and localtime to use reentrant
variants. (Seth Hall)
2.4-541 | 2016-05-06 17:58:45 -0700
* A set of new built-in functions for gathering execution statistics:
get_net_stats(), get_conn_stats(), get_proc_stats(),
get_event_stats(), get_reassembler_stats(), get_dns_stats(),
get_timer_stats(), get_file_analysis_stats(), get_thread_stats(),
get_gap_stats(), get_matcher_stats().
net_stats() and resource_usage() have been superseded by these. (Seth
Hall)
* New policy script misc/stats.bro that records Bro execution
statistics in a standard Bro log file. (Seth Hall)
* A series of documentation improvements. (Daniel Thayer)
* Rudimentary XMPP StartTLS analyzer. It parses certificates out of
XMPP connections using StartTLS. It aborts processing if StartTLS
is not found. (Johanna Amann)
2.4-507 | 2016-05-03 11:18:16 -0700
* Fix incorrect type tags in Bro broker source code. These are just
used for error reporting. (Daniel Thayer)
* Update docs and tests of the fmt() function. (Daniel Thayer)
2.4-500 | 2016-05-03 11:16:50 -0700
* Updating submodule(s).
2.4-498 | 2016-04-28 11:34:52 -0700
* Rename Broker::print to Broker::send_print and Broker::event to

NEWS
View file

@ -33,14 +33,17 @@ New Functionality
- Bro now supports the Radiotap header for 802.11 frames.

- Bro now has rudimentary IMAP and XMPP analyzers examining the initial
  phases of the protocol. Right now these analyzers only identify
  STARTTLS sessions, handing them over to TLS analysis. The analyzers
  do not yet analyze any further IMAP/XMPP content.

- Bro now tracks VLAN IDs. To record them inside the connection log,
  load protocols/conn/vlan-logging.bro.

- The new misc/stats.bro records Bro execution statistics in a
  standard Bro log file.

- A new dns_CAA_reply event gives access to DNS Certification Authority
  Authorization replies.
@ -83,6 +86,13 @@ New Functionality
- The IRC analyzer now recognizes StartTLS sessions and enables the SSL
  analyzer for them.

- A set of new built-in functions for gathering execution statistics:
  get_net_stats(), get_conn_stats(), get_proc_stats(),
  get_event_stats(), get_reassembler_stats(), get_dns_stats(),
  get_timer_stats(), get_file_analysis_stats(), get_thread_stats(),
  get_gap_stats(), get_matcher_stats().

- New Bro plugins in aux/plugins:

  - af_packet: Native AF_PACKET support.
@ -102,6 +112,9 @@ Changed Functionality
- ``SSH::skip_processing_after_detection`` was removed. The functionality was
  replaced by ``SSH::disable_analyzer_after_detection``.

- ``net_stats()`` and ``resource_usage()`` have been superseded by the
  new execution statistics functions (see above).

- Some script-level identifiers have changed their names:

  snaplen -> Pcap::snaplen

View file

@ -1 +1 @@
-2.4-498
+2.4-569

@ -1 +1 @@
-Subproject commit edbbe445d92cc6a5c2557661195f486b784769db
+Subproject commit 4179f9f00f4df21e4bcfece0323ec3468f688e8a

@ -1 +1 @@
-Subproject commit cb771a3cf592d46643eea35d206b9f3e1a0758f7
+Subproject commit 50d33db5d12b81187ea127a08903b444a3c4bd04

@ -1 +1 @@
-Subproject commit 7df7878abfd864f9ae5609918c0f04f58b5f5e2d
+Subproject commit 9cce8be1a9c02b275f8a51d175e4729bdb0afee4

@ -1 +1 @@
-Subproject commit ab61be0c4f128c976f72dfa5a09a87cd842f387a
+Subproject commit ebab672fa404b26944a6df6fbfb1aaab95ec5d48

View file

@ -14,6 +14,9 @@
/* We are on a Linux system */
#cmakedefine HAVE_LINUX

/* We are on a Mac OS X (Darwin) system */
#cmakedefine HAVE_DARWIN

/* Define if you have the `mallinfo' function. */
#cmakedefine HAVE_MALLINFO

View file

@ -96,13 +96,13 @@ logging is done remotely to the manager, and normally very little is written
to disk.

The rule of thumb we have followed recently is to allocate approximately 1
core for every 250Mbps of traffic that is being analyzed. However, this
estimate could be extremely traffic mix-specific. It has generally worked
for mixed traffic with many users and servers. For example, if your traffic
peaks around 2Gbps (combined) and you want to handle traffic at peak load,
you may want to have 8 cores available (2048 / 250 == 8.2). If the 250Mbps
estimate works for your traffic, this could be handled by 2 physical hosts
dedicated to being workers with each one containing a quad-core processor.

Once a flow-based load balancer is put into place this model is extremely
easy to scale. It is recommended that you estimate the amount of

View file

@ -0,0 +1 @@
../../../../aux/plugins/kafka/README

View file

@ -39,6 +39,8 @@ Network Protocols
+----------------------------+---------------------------------------+---------------------------------+
| rdp.log                    | RDP                                   | :bro:type:`RDP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| rfb.log                    | Remote Framebuffer (RFB)              | :bro:type:`RFB::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| sip.log                    | SIP                                   | :bro:type:`SIP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| smtp.log                   | SMTP transactions                     | :bro:type:`SMTP::Info`          |
+----------------------------+---------------------------------------+---------------------------------+

View file

@ -277,16 +277,25 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: delete

    The "delete" statement is used to remove an element from a
    :bro:type:`set` or :bro:type:`table`, or to remove a value from
    a :bro:type:`record` field that has the :bro:attr:`&optional` attribute.

    When attempting to remove an element from a set or table,
    nothing happens if the specified index does not exist.

    When attempting to remove a value from an "&optional" record field,
    nothing happens if that field doesn't have a value.

    Example::

        local myset = set("this", "test");
        local mytable = table(["key1"] = 80/tcp, ["key2"] = 53/udp);
        local myrec = MyRecordType($a = 1, $b = 2);
        delete myset["test"];
        delete mytable["key1"];

        # In this example, "b" must have the "&optional" attribute
        delete myrec$b;
.. bro:keyword:: event .. bro:keyword:: event
The "event" statement immediately queues invocation of an event handler. The "event" statement immediately queues invocation of an event handler.
@ -306,30 +315,33 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: for

    A "for" loop iterates over each element in a string, set, vector, or
    table and executes a statement for each iteration (note that the order
    in which the loop iterates over the elements in a set or a table is
    nondeterministic). However, no loop iterations occur if the string,
    set, vector, or table is empty.

    For each iteration of the loop, a loop variable will be assigned to an
    element if the expression evaluates to a string or set, or an index if
    the expression evaluates to a vector or table. Then the statement
    is executed.

    If the expression is a table or a set with more than one index, then the
    loop variable must be specified as a comma-separated list of different
    loop variables (one for each index), enclosed in brackets.

    A :bro:keyword:`break` statement can be used at any time to immediately
    terminate the "for" loop, and a :bro:keyword:`next` statement can be
    used to skip to the next loop iteration.

    Note that the loop variable in a "for" statement is not allowed to be
    a global variable, and it does not need to be declared prior to the "for"
    statement. The type will be inferred from the elements of the
    expression.

    Currently, modifying a container's membership while iterating over it may
    result in undefined behavior, so do not add or remove elements
    inside the loop.

    Example::

        local myset = set(80/tcp, 81/tcp);
@ -532,8 +544,6 @@ Here are the statements that the Bro scripting language supports.
    end with either a :bro:keyword:`break`, :bro:keyword:`fallthrough`, or
    :bro:keyword:`return` statement (although "return" is allowed only
    if the "switch" statement is inside a function, hook, or event handler).

    Note that the braces in a "switch" statement are always required (these
    do not indicate the presence of a `compound statement`_), and that no
@ -604,12 +614,9 @@ Here are the statements that the Bro scripting language supports.
        if ( skip_ahead() )
            next;

        if ( finish_up )
            break;
        }

.. _compound statement:

View file

@ -0,0 +1,25 @@
module Conn;

export {
	## The record type which contains column fields of the connection log.
	type Info: record {
		ts: time &log;
		uid: string &log;
		id: conn_id &log;
		proto: transport_proto &log;
		service: string &log &optional;
		duration: interval &log &optional;
		orig_bytes: count &log &optional;
		resp_bytes: count &log &optional;
		conn_state: string &log &optional;
		local_orig: bool &log &optional;
		local_resp: bool &log &optional;
		missed_bytes: count &log &default=0;
		history: string &log &optional;
		orig_pkts: count &log &optional;
		orig_ip_bytes: count &log &optional;
		resp_pkts: count &log &optional;
		resp_ip_bytes: count &log &optional;
		tunnel_parents: set[string] &log;
	};
}

View file

@ -0,0 +1,7 @@
module HTTP;

export {
	## This setting changes if passwords used in Basic-Auth are captured or
	## not.
	const default_capture_password = F &redef;
}

View file

@ -362,8 +362,7 @@ decrypted from HTTP streams is stored in
:bro:see:`HTTP::default_capture_password` as shown in the stripped down
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.

.. btest-include:: ${DOC_ROOT}/scripting/http_main.bro

Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the

@ -825,8 +824,7 @@ example of the ``record`` data type in the earlier sections, the

:bro:type:`Conn::Info`, which corresponds to the fields logged into
``conn.log``, is shown by the excerpt below.

.. btest-include:: ${DOC_ROOT}/scripting/data_type_record.bro

Looking at the structure of the definition, a new collection of data
types is being defined as a type called ``Info``. Since this type

View file

@ -6,6 +6,7 @@ module X509;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the X.509 log.
	type Info: record {
		## Current timestamp.
		ts: time &log;

View file

@ -270,6 +270,8 @@ export {
module Broker;

@ifdef ( Broker::__enable )

function enable(flags: EndpointFlags &default = EndpointFlags()) : bool
	{
	return __enable(flags);

@ -370,3 +372,4 @@ function unsubscribe_to_logs(topic_prefix: string): bool

	return __unsubscribe_to_logs(topic_prefix);
	}

@endif

View file

@ -57,6 +57,8 @@ export {
	rocksdb: RocksDBOptions &default = RocksDBOptions();
	};

@ifdef ( Broker::__enable )

	## Create a master data store which contains key-value pairs.
	##
	## id: a unique name for the data store.

@ -720,12 +722,16 @@ export {

	## Returns: element in the collection that the iterator currently references.
	global record_iterator_value: function(it: opaque of Broker::RecordIterator): Broker::Data;

@endif
}

@load base/bif/store.bif

module Broker;

@ifdef ( Broker::__enable )

function create_master(id: string, b: BackendType &default = MEMORY,
                       options: BackendOptions &default = BackendOptions()): opaque of Broker::Handle
	{

@ -1095,3 +1101,5 @@ function record_iterator_value(it: opaque of Broker::RecordIterator): Broker::Da

	{
	return __record_iterator_value(it);
	}

@endif

View file

@ -68,7 +68,7 @@ export {
	## Events raised by TimeMachine instances and handled by workers.
	const tm2worker_events = /EMPTY/ &redef;

	## Events sent by the control host (i.e., BroControl) when dynamically
	## connecting to a running instance to update settings or request data.
	const control_events = Control::controller_events &redef;

View file

@ -23,20 +23,20 @@ export {
	# ### Generic functions and events.
	# ###

	## Activates a plugin.
	##
	## p: The plugin to activate.
	##
	## priority: The higher the priority, the earlier this plugin will be checked
	##           whether it supports an operation, relative to other plugins.
	global activate: function(p: PluginState, priority: int);

	## Event that is used to initialize plugins. Place all plugin initialization
	## related functionality in this event.
	global NetControl::init: event();

	## Event that is raised once all plugins activated in ``NetControl::init``
	## have finished their initialization.
	global NetControl::init_done: event();

	# ###
@ -109,21 +109,24 @@ export {
	##
	## r: The rule to install.
	##
	## Returns: If successful, returns an ID string unique to the rule that can
	##          later be used to refer to it. If unsuccessful, returns an empty
	##          string. The ID is also assigned to ``r$id``. Note that
	##          "successful" means "a plugin knew how to handle the rule", it
	##          doesn't necessarily mean that it was indeed successfully put in
	##          place, because that might happen asynchronously and thus fail
	##          only later.
	global add_rule: function(r: Rule) : string;

	## Removes a rule.
	##
	## id: The rule to remove, specified as the ID returned by :bro:id:`NetControl::add_rule`.
	##
	## Returns: True if successful, the relevant plugin indicated that it knew
	##          how to handle the removal. Note that again "success" means the
	##          plugin accepted the removal. They might still fail to put it
	##          into effect, as that might happen asynchronously and thus go
	##          wrong at that point.
	global remove_rule: function(id: string) : bool;
	## Deletes a rule without removing it from the backends to which it has been
@ -180,7 +183,7 @@ export {
	## r: The rule now removed.
	##
	## p: The state for the plugin that had the rule in place and now
	##    removed it.
	##
	## msg: An optional informational message by the plugin.
	global rule_removed: event(r: Rule, p: PluginState, msg: string &default="");
@ -192,7 +195,7 @@ export {
	## i: Additional flow information, if supported by the protocol.
	##
	## p: The state for the plugin that had the rule in place and now
	##    removed it.
	##
	## msg: An optional informational message by the plugin.
	global rule_timeout: event(r: Rule, i: FlowInfo, p: PluginState);
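The add_rule/remove_rule pair documented above can be driven directly from script land. A hypothetical sketch follows; the exact ``Rule`` and ``Entity`` field names ($ty, $ip, $entity, $target, $expire) are assumptions based on the NetControl type definitions, only parts of which appear in this diff:

```bro
# Hypothetical sketch of the low-level NetControl rule API. Field names
# are assumed from the NetControl::Rule/Entity records; adjust as needed.
event bro_init()
	{
	# Drop all traffic from one address for a minute.
	local e = NetControl::Entity($ty=NetControl::ADDRESS, $ip=10.0.0.1/32);
	local r = NetControl::Rule($ty=NetControl::DROP,
	                           $target=NetControl::FORWARD,
	                           $entity=e, $expire=1min);
	local id = NetControl::add_rule(r);

	# An empty ID means no plugin knew how to handle the rule;
	# otherwise the ID can later be passed to remove_rule().
	if ( id != "" )
		NetControl::remove_rule(id);
	}
```

Note that, per the documentation above, a non-empty return value only means a plugin accepted the rule; the actual installation may still fail asynchronously.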

View file

@ -6,6 +6,8 @@ module NetControl;
@load ../plugin
@load base/frameworks/broker

@ifdef ( Broker::__enable )

export {
	type AclRule : record {
		command: string;

@ -306,3 +308,4 @@ function create_acld(config: AcldConfig) : PluginState

	return p;
	}

@endif

View file

@ -8,6 +8,8 @@ module NetControl;
@load ../plugin
@load base/frameworks/broker

@ifdef ( Broker::__enable )

export {
	type BrokerConfig: record {
		## The broker topic used to send events to

@ -215,3 +217,5 @@ function create_broker(config: BrokerConfig, can_expire: bool) : PluginState

	return p;
	}

@endif

View file

@ -11,7 +11,7 @@ export {
	## plugin simply logs the operations it receives.
	##
	## do_something: If true, the plugin will claim it supports all operations; if
	##               false, it will indicate it doesn't support any.
	global create_debug: function(do_something: bool) : PluginState;
}

View file

@ -14,7 +14,7 @@ export {
	MAC,	##< Activity involving a MAC address.
	};

	## Type for defining a flow.
	type Flow: record {
		src_h: subnet &optional;	##< The source IP address/subnet.
		src_p: port &optional;	##< The source port number.

@ -27,10 +27,10 @@ export {

	## Type defining the entity a :bro:id:`Rule` is operating on.
	type Entity: record {
		ty: EntityType;	##< Type of entity.
		conn: conn_id &optional;	##< Used with :bro:enum:`NetControl::CONNECTION`.
		flow: Flow &optional;	##< Used with :bro:enum:`NetControl::FLOW`.
		ip: subnet &optional;	##< Used with :bro:enum:`NetControl::ADDRESS` to specify a CIDR subnet.
		mac: string &optional;	##< Used with :bro:enum:`NetControl::MAC`.
	};

	## Target of :bro:id:`Rule` action.

@ -68,7 +68,7 @@ export {

	WHITELIST,
	};

	## Type for defining a flow modification action.
	type FlowMod: record {
		src_h: addr &optional;	##< The source IP address.
		src_p: count &optional;	##< The source port number.

@ -90,8 +90,8 @@ export {

	priority: int &default=default_priority;	##< Priority if multiple rules match an entity (larger value is higher priority).
	location: string &optional;	##< Optional string describing where/what installed the rule.
	out_port: count &optional;	##< Argument for :bro:enum:`NetControl::REDIRECT` rules.
	mod: FlowMod &optional;	##< Argument for :bro:enum:`NetControl::MODIFY` rules.
	id: string &default="";	##< Internally determined unique ID for this rule. Will be set when added.
	cid: count &default=0;	##< Internally determined unique numeric ID for this rule. Set when added.

View file

@ -44,6 +44,7 @@ export {
	ACTION_ALARM,
	};

	## Type that represents a set of actions.
	type ActionSet: set[Notice::Action];

	## The notice framework is able to do automatic notice suppression by

@ -52,6 +53,7 @@ export {

	## suppression.
	const default_suppression_interval = 1hrs &redef;

	## The record type that is used for representing and logging notices.
	type Info: record {
		## An absolute time indicating when the notice occurred,
		## defaults to the current network time.

View file

@ -5,6 +5,8 @@
module OpenFlow;

@ifdef ( Broker::__enable )

export {
	redef enum Plugin += {
		BROKER,

@ -93,3 +95,4 @@ function broker_new(name: string, host: addr, host_port: port, topic: string, dp

	return c;
	}

@endif

View file

@ -18,7 +18,7 @@ export {
event net_stats_update(last_stat: NetStats)
	{
	local ns = get_net_stats();
	local new_dropped = ns$pkts_dropped - last_stat$pkts_dropped;
	if ( new_dropped > 0 )
		{

@ -38,5 +38,5 @@ event bro_init()

	# Since this currently only calculates packet drops, let's skip the stats
	# collection if reading traces.
	if ( ! reading_traces() )
		schedule stats_collection_interval { net_stats_update(get_net_stats()) };
	}
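The migration shown in this hunk (net_stats() becoming get_net_stats()) follows the same pattern everywhere: call the new BIF and read fields from the returned record. A minimal sketch, using only the ``pkts_dropped`` field that appears in the diff above:

```bro
# Minimal sketch of the new statistics API: get_net_stats() supersedes
# the old net_stats() call and returns a NetStats record.
event bro_init()
	{
	local ns = get_net_stats();
	print fmt("packets dropped so far: %d", ns$pkts_dropped);
	}
```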

View file

@ -5,7 +5,8 @@
module SumStats;

export {
	## Type to represent the calculations that are available. The calculations
	## are all defined as plugins.
	type Calculation: enum {
		PLACEHOLDER
	};

@ -39,6 +40,7 @@ export {

	str: string &optional;
	};

	## Represents a reducer.
	type Reducer: record {
		## Observation stream identifier for the reducer
		## to attach to.

@ -56,7 +58,7 @@ export {

	normalize_key: function(key: SumStats::Key): Key &optional;
	};

	## Result calculated for an observation stream fed into a reducer.
	## Most of the fields are added by plugins.
	type ResultVal: record {
		## The time when the first observation was added to

@ -71,14 +73,15 @@ export {

	num: count &default=0;
	};

	## Type to store a table of results for multiple reducers indexed by
	## observation stream identifier.
	type Result: table[string] of ResultVal;

	## Type to store a table of sumstats results indexed by keys.
	type ResultTable: table[Key] of Result;

	## Represents a SumStat, which consists of an aggregation of reducers along
	## with mechanisms to handle various situations like the epoch ending
	## or thresholds being crossed.
	##
	## It's best to not access any global state outside

@ -101,21 +104,28 @@ export {

	## The reducers for the SumStat.
	reducers: set[Reducer];

	## A function that will be called once for each observation in order
	## to calculate a value from the :bro:see:`SumStats::Result` structure
	## which will be used for thresholding.
	## This function is required if a *threshold* value or
	## a *threshold_series* is given.
	threshold_val: function(key: SumStats::Key, result: SumStats::Result): double &optional;

	## The threshold value for calling the *threshold_crossed* callback.
	## If you need more than one threshold value, then use
	## *threshold_series* instead.
	threshold: double &optional;

	## A series of thresholds for calling the *threshold_crossed*
	## callback. These thresholds must be listed in ascending order,
	## because a threshold is not checked until the preceding one has
	## been crossed.
	threshold_series: vector of double &optional;

	## A callback that is called when a threshold is crossed.
	## A threshold is crossed when the value returned from *threshold_val*
	## is greater than or equal to the threshold value, but only the first
	## time this happens within an epoch.
	threshold_crossed: function(key: SumStats::Key, result: SumStats::Result) &optional;

	## A callback that receives each of the results at the
@ -130,6 +140,8 @@ export {
}; };
## Create a summary statistic. ## Create a summary statistic.
##
## ss: The SumStat to create.
global create: function(ss: SumStats::SumStat); global create: function(ss: SumStats::SumStat);
## Add data into an observation stream. This should be ## Add data into an observation stream. This should be
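The threshold semantics documented above (an ascending *threshold_series*, with each threshold fired at most once per epoch and in order) can be illustrated compactly. This is a hypothetical Python sketch of that evaluation order, not Bro's actual implementation:

```python
def check_thresholds(value, thresholds, next_index):
    """Return (crossed, new_index): the thresholds crossed by `value`
    starting at position `next_index`, and the updated position.

    `thresholds` must be sorted ascending; a threshold is only examined
    once all preceding thresholds have been crossed."""
    crossed = []
    while next_index < len(thresholds) and value >= thresholds[next_index]:
        crossed.append(thresholds[next_index])
        next_index += 1
    return crossed, next_index

# A value of 12.0 crosses the first two thresholds; the index advances
# so neither fires again during this epoch.
print(check_thresholds(12.0, [5.0, 10.0, 100.0], 0))  # ([5.0, 10.0], 2)
```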
@@ -1,3 +1,5 @@
##! Calculate the average.

@load ../main

module SumStats;
@@ -9,7 +11,7 @@ export {
	};

	redef record ResultVal += {
		## For numeric data, this is the average of all values.
		average: double &optional;
	};
}
@@ -1,3 +1,5 @@
##! Calculate the number of unique values (using the HyperLogLog algorithm).

@load base/frameworks/sumstats

module SumStats;
@@ -1,3 +1,5 @@
##! Keep the last X observations.

@load base/frameworks/sumstats
@load base/utils/queue
@@ -1,3 +1,5 @@
##! Find the maximum value.

@load ../main

module SumStats;
@@ -9,7 +11,7 @@ export {
	};

	redef record ResultVal += {
		## For numeric data, this tracks the maximum value.
		max: double &optional;
	};
}
@@ -1,3 +1,5 @@
##! Find the minimum value.

@load ../main

module SumStats;
@@ -9,7 +11,7 @@ export {
	};

	redef record ResultVal += {
		## For numeric data, this tracks the minimum value.
		min: double &optional;
	};
}
@@ -1,3 +1,5 @@
##! Keep a random sample of values.

@load base/frameworks/sumstats/main

module SumStats;
@@ -10,7 +12,7 @@ export {
	};

	redef record Reducer += {
		## The number of sample Observations to collect.
		num_samples: count &default=0;
	};
@@ -1,3 +1,5 @@
##! Calculate the standard deviation.

@load ./variance
@load ../main
@@ -5,7 +7,7 @@ module SumStats;
export {
	redef enum Calculation += {
		## Calculate the standard deviation of the values.
		STD_DEV
	};
@@ -1,11 +1,13 @@
##! Calculate the sum.

@load ../main

module SumStats;

export {
	redef enum Calculation += {
		## Calculate the sum of the values. For string values,
		## this will be the number of strings.
		SUM
	};
@@ -1,3 +1,5 @@
##! Keep the top-k (i.e., most frequently occurring) observations.

@load base/frameworks/sumstats

module SumStats;
@@ -9,10 +11,13 @@ export {
	};

	redef enum Calculation += {
		## Keep a top-k list of values.
		TOPK
	};

	redef record ResultVal += {
		## A handle which can be passed to some built-in functions to get
		## the top-k results.
		topk: opaque of topk &optional;
	};
@@ -1,10 +1,12 @@
##! Calculate the number of unique values.

@load ../main

module SumStats;

export {
	redef record Reducer += {
		## Maximum number of unique values to store.
		unique_max: count &optional;
	};
@@ -15,7 +17,7 @@ export {
	redef record ResultVal += {
		## If cardinality is being tracked, the number of unique
		## values is tracked here.
		unique: count &default=0;
	};
}
@@ -1,3 +1,5 @@
##! Calculate the variance.

@load ./average
@load ../main
@@ -5,12 +7,12 @@ module SumStats;
export {
	redef enum Calculation += {
		## Calculate the variance of the values.
		VARIANCE
	};

	redef record ResultVal += {
		## For numeric data, this is the variance.
		variance: double &optional;
	};
}
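Since the variance plugin builds on the average plugin, the calculation is naturally done incrementally as observations arrive. As a hypothetical illustration of such an online scheme (in the spirit of Welford's algorithm, and not the actual plugin code, which may differ in formula and merge behavior):

```python
class RunningVariance:
    """Single-pass (population) variance over a stream of numbers."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def observe(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n > 0 else 0.0

rv = RunningVariance()
for x in [1.0, 2.0, 3.0, 4.0]:
    rv.observe(x)
print(rv.variance())  # 1.25
```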
@@ -474,64 +474,127 @@ type NetStats: record {
	bytes_recvd: count &default=0;	##< Bytes received by Bro.
};

type ConnStats: record {
	total_conns: count;	##<
	current_conns: count;	##<
	current_conns_extern: count;	##<
	sess_current_conns: count;	##<

	num_packets: count;
	num_fragments: count;
	max_fragments: count;

	num_tcp_conns: count;	##< Current number of TCP connections in memory.
	max_tcp_conns: count;	##< Maximum number of concurrent TCP connections so far.
	cumulative_tcp_conns: count;	##< Total number of TCP connections so far.

	num_udp_conns: count;	##< Current number of UDP flows in memory.
	max_udp_conns: count;	##< Maximum number of concurrent UDP flows so far.
	cumulative_udp_conns: count;	##< Total number of UDP flows so far.

	num_icmp_conns: count;	##< Current number of ICMP flows in memory.
	max_icmp_conns: count;	##< Maximum number of concurrent ICMP flows so far.
	cumulative_icmp_conns: count;	##< Total number of ICMP flows so far.

	killed_by_inactivity: count;
};

## Statistics about Bro's process.
##
## .. bro:see:: get_proc_stats
##
## .. note:: All process-level values refer to Bro's main process only, not to
##    the child process it spawns for doing communication.
type ProcStats: record {
	debug: bool;	##< True if compiled with --enable-debug.
	start_time: time;	##< Start time of process.
	real_time: interval;	##< Elapsed real time since Bro started running.
	user_time: interval;	##< User CPU seconds.
	system_time: interval;	##< System CPU seconds.
	mem: count;	##< Maximum memory consumed, in KB.
	minor_faults: count;	##< Page faults not requiring actual I/O.
	major_faults: count;	##< Page faults requiring actual I/O.
	num_swap: count;	##< Times swapped out.
	blocking_input: count;	##< Blocking input operations.
	blocking_output: count;	##< Blocking output operations.
	num_context: count;	##< Number of involuntary context switches.
};

type EventStats: record {
	queued: count;	##< Total number of events queued so far.
	dispatched: count;	##< Total number of events dispatched so far.
};
## .. bro:see:: get_reassembler_stats
type ReassemblerStats: record {
	file_size: count;	##< Byte size of File reassembly tracking.
	frag_size: count;	##< Byte size of Fragment reassembly tracking.
	tcp_size: count;	##< Byte size of TCP reassembly tracking.
	unknown_size: count;	##< Byte size of reassembly tracking for unknown purposes.
};

## Statistics of all regular expression matchers.
##
## .. bro:see:: get_matcher_stats
type MatcherStats: record {
	matchers: count;	##< Number of distinct RE matchers.
	nfa_states: count;	##< Number of NFA states across all matchers.
	dfa_states: count;	##< Number of DFA states across all matchers.
	computed: count;	##< Number of computed DFA state transitions.
	mem: count;	##< Number of bytes used by DFA states.
	hits: count;	##< Number of cache hits.
	misses: count;	##< Number of cache misses.
};
## Statistics of timers.
##
## .. bro:see:: get_timer_stats
type TimerStats: record {
current: count; ##< Current number of pending timers.
max: count; ##< Maximum number of concurrent timers pending so far.
cumulative: count; ##< Cumulative number of timers scheduled.
};
## Statistics of file analysis.
##
## .. bro:see:: get_file_analysis_stats
type FileAnalysisStats: record {
current: count; ##< Current number of files being analyzed.
max: count; ##< Maximum number of concurrent files so far.
cumulative: count; ##< Cumulative number of files analyzed.
};
## Statistics related to Bro's active use of DNS. These numbers reflect
## only the DNS queries that Bro performs on its own, not DNS traffic
## observed on the wire.
##
## .. bro:see:: get_dns_stats
type DNSStats: record {
	requests: count;	##< Number of DNS requests made.
	successful: count;	##< Number of successful DNS replies.
	failed: count;	##< Number of DNS reply failures.
	pending: count;	##< Current pending queries.
	cached_hosts: count;	##< Number of cached hosts.
	cached_addresses: count;	##< Number of cached addresses.
};

## Statistics about the number of gaps in TCP connections.
##
## .. bro:see:: get_gap_stats
type GapStats: record {
	ack_events: count;	##< How many ack events *could* have had gaps.
	ack_bytes: count;	##< How many bytes those covered.
	gap_events: count;	##< How many *did* have gaps.
	gap_bytes: count;	##< How many bytes were missing in the gaps.
};

## Statistics about threads.
##
## .. bro:see:: get_thread_stats
type ThreadStats: record {
	num_threads: count;
};

## Deprecated.
@@ -3435,23 +3498,17 @@ global pkt_profile_file: file &redef;
## .. bro:see:: load_sample
global load_sample_freq = 20 &redef;

## Whether to attempt to automatically detect SYN/FIN/RST-filtered trace
## and not report missing segments for such connections.
## If this is enabled, then missing data at the end of connections may not
## be reported via :bro:see:`content_gap`.
const detect_filtered_trace = F &redef;

## Whether we want :bro:see:`content_gap` and :bro:see:`get_gap_summary` for partial
## connections. A connection is partial if it is missing a full handshake. Note
## that gap reports for partial connections might not be reliable.
##
## .. bro:see:: content_gap get_gap_summary partial_connection
const report_gaps_for_partial = F &redef;

## Flag to prevent Bro from exiting automatically when input is exhausted.
@@ -37,10 +37,8 @@
@load base/frameworks/reporter
@load base/frameworks/sumstats
@load base/frameworks/tunnels
@load base/frameworks/openflow
@load base/frameworks/netcontrol

@load base/protocols/conn
@load base/protocols/dhcp
@@ -65,6 +63,7 @@
@load base/protocols/ssl
@load base/protocols/syslog
@load base/protocols/tunnels
@load base/protocols/xmpp

@load base/files/pe
@load base/files/hash
@@ -26,7 +26,7 @@ event ChecksumOffloading::check()
	if ( done )
		return;

	local pkts_recvd = get_net_stats()$pkts_recvd;
	local bad_ip_checksum_pct = (pkts_recvd != 0) ? (bad_ip_checksums*1.0 / pkts_recvd*1.0) : 0;
	local bad_tcp_checksum_pct = (pkts_recvd != 0) ? (bad_tcp_checksums*1.0 / pkts_recvd*1.0) : 0;
	local bad_udp_checksum_pct = (pkts_recvd != 0) ? (bad_udp_checksums*1.0 / pkts_recvd*1.0) : 0;
@@ -52,7 +52,7 @@ export {
		## The Recursion Available bit in a response message indicates
		## that the name server supports recursive queries.
		RA: bool &log &default=F;
		## A reserved field that is usually zero in
		## queries and responses.
		Z: count &log &default=0;
		## The set of resource descriptions in the query answer.
@@ -21,6 +21,7 @@ export {
	## not.
	const default_capture_password = F &redef;

	## The record type which contains the fields of the HTTP log.
	type Info: record {
		## Timestamp for when the request happened.
		ts: time &log;
@@ -3,6 +3,7 @@ module RFB;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the RFB log.
	type Info: record {
		## Timestamp for when the event happened.
		ts: time &log;
@@ -10,6 +10,7 @@ module SIP;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the SIP log.
	type Info: record {
		## Timestamp for when the request happened.
		ts: time &log;
@@ -7,6 +7,7 @@ module SMTP;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the SMTP log.
	type Info: record {
		## Time when the message was first seen.
		ts: time &log;
@@ -6,6 +6,7 @@ module SOCKS;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the SOCKS log.
	type Info: record {
		## Time when the proxy connection was first detected.
		ts: time &log;
@@ -8,6 +8,7 @@ export {
	## The SSH protocol logging stream identifier.
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the SSH log.
	type Info: record {
		## Time when the SSH connection began.
		ts: time &log;
@@ -8,6 +8,7 @@ module SSL;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the SSL log.
	type Info: record {
		## Time when the SSL connection was first detected.
		ts: time &log;
@@ -7,7 +7,8 @@ module Syslog;
export {
	redef enum Log::ID += { LOG };

	## The record type which contains the fields of the syslog log.
	type Info: record {
		## Timestamp when the syslog message was seen.
		ts: time &log;
@@ -0,0 +1,5 @@
Support for the Extensible Messaging and Presence Protocol (XMPP).

Note that the XMPP analyzer currently only analyzes a session up to the
point where it does (or does not) switch to TLS via StartTLS. Hence, we do
not get actual chat information out of XMPP sessions, only X509 certificates.
@@ -0,0 +1,3 @@
@load ./main
@load-sigs ./dpd.sig
@@ -0,0 +1,5 @@
signature dpd_xmpp {
ip-proto == tcp
payload /^(<\?xml[^?>]*\?>)?[\n\r ]*<stream:stream [^>]*xmlns='jabber:/
enable "xmpp"
}
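To see what payloads the dpd_xmpp signature triggers on, the same regular expression can be exercised in Python (the pattern below is taken verbatim from the signature; the sample stream header is a made-up but typical XMPP client hello):

```python
import re

# Optional XML declaration, optional whitespace, then an XMPP stream
# header whose default namespace starts with 'jabber:'.
XMPP_RE = re.compile(
    r"^(<\?xml[^?>]*\?>)?[\n\r ]*<stream:stream [^>]*xmlns='jabber:")

hello = ("<?xml version='1.0'?>"
         "<stream:stream to='example.com' xmlns='jabber:client' "
         "xmlns:stream='http://etherx.jabber.org/streams'>")

print(bool(XMPP_RE.match(hello)))  # True
```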
@@ -0,0 +1,11 @@
module XMPP;
const ports = { 5222/tcp, 5269/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_XMPP, ports);
}
@@ -28,13 +28,9 @@ event Control::peer_status_request()
		local peer = Communication::nodes[p];
		if ( ! peer$connected )
			next;

		status += fmt("%.6f peer=%s host=%s\n",
		              network_time(), peer$peer$descr, peer$host);
		}

	event Control::peer_status_response(status);
@@ -42,24 +38,24 @@ event Control::peer_status_request()
event Control::net_stats_request()
	{
	local ns = get_net_stats();
	local reply = fmt("%.6f recvd=%d dropped=%d link=%d\n", network_time(),
	                  ns$pkts_recvd, ns$pkts_dropped, ns$pkts_link);
	event Control::net_stats_response(reply);
	}

event Control::configuration_update_request()
	{
	# Generate the alias event.
	event Control::configuration_update();

	# Don't need to do anything in particular here, it's just indicating that
	# the configuration is going to be updated. This event could be handled
	# by other scripts if they need to do some ancillary processing if
	# redef-able consts are modified at runtime.
	event Control::configuration_update_response();
	}

event Control::shutdown_request()
	{
	# Send the acknowledgement event.
@@ -17,4 +17,4 @@ event file_new(f: fa_file)
event file_entropy(f: fa_file, ent: entropy_test_result)
	{
	f$info$entropy = ent$entropy;
	}
@@ -56,7 +56,7 @@ event CaptureLoss::take_measurement(last_ts: time, last_acks: count, last_gaps:
		}

	local now = network_time();
	local g = get_gap_stats();
	local acks = g$ack_events - last_acks;
	local gaps = g$gap_events - last_gaps;
	local pct_lost = (acks == 0) ? 0.0 : (100 * (1.0 * gaps) / (1.0 * acks));
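The capture-loss measurement above reduces to a ratio of interval deltas: how many of the ack events seen this interval had content gaps. A hypothetical Python sketch of the same computation:

```python
def capture_loss_pct(ack_events, gap_events, last_acks, last_gaps):
    """Percentage of ack events in the current interval that had gaps.

    `ack_events`/`gap_events` are cumulative counters (as in GapStats);
    `last_acks`/`last_gaps` are their values at the previous measurement."""
    acks = ack_events - last_acks
    gaps = gap_events - last_gaps
    return 0.0 if acks == 0 else 100.0 * gaps / acks

# 5 gaps out of 1000 ack events in the interval -> 0.5% estimated loss
print(capture_loss_pct(1000, 5, 0, 0))  # 0.5
```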
@@ -1,6 +1,4 @@
##! Log memory/packet/lag statistics.

@load base/frameworks/notice
@@ -10,7 +8,7 @@ export {
	redef enum Log::ID += { LOG };

	## How often stats are reported.
	const report_interval = 5min &redef;

	type Info: record {
		## Timestamp for the measurement.
@@ -21,27 +19,63 @@ export {
		mem: count &log;
		## Number of packets processed since the last stats interval.
		pkts_proc: count &log;
		## Number of bytes received since the last stats interval if
		## reading live traffic.
		bytes_recv: count &log;
		## Number of packets dropped since the last stats interval if
		## reading live traffic.
		pkts_dropped: count &log &optional;
		## Number of packets seen on the link since the last stats
		## interval if reading live traffic.
		pkts_link: count &log &optional;
		## Lag between the wall clock and packet timestamps if reading
		## live traffic.
		pkt_lag: interval &log &optional;
## Number of events processed since the last stats interval.
events_proc: count &log;
## Number of events that have been queued since the last stats
## interval.
events_queued: count &log;
## TCP connections currently in memory.
active_tcp_conns: count &log;
## UDP connections currently in memory.
active_udp_conns: count &log;
## ICMP connections currently in memory.
active_icmp_conns: count &log;
## TCP connections seen since last stats interval.
tcp_conns: count &log;
## UDP connections seen since last stats interval.
udp_conns: count &log;
## ICMP connections seen since last stats interval.
icmp_conns: count &log;
## Number of timers scheduled since last stats interval.
timers: count &log;
## Current number of scheduled timers.
active_timers: count &log;
## Number of files seen since last stats interval.
files: count &log;
## Current number of files actively being seen.
active_files: count &log;
## Number of DNS requests seen since last stats interval.
dns_requests: count &log;
## Current number of DNS requests awaiting a reply.
active_dns_requests: count &log;
## Current size of TCP data in reassembly.
reassem_tcp_size: count &log;
## Current size of File data in reassembly.
reassem_file_size: count &log;
## Current size of packet fragment data in reassembly.
reassem_frag_size: count &log;
		## Current size of unknown data in reassembly (currently this
		## is only the PIA buffer).
reassem_unknown_size: count &log;
	};

	## Event to catch stats as they are written to the logging stream.
@@ -53,38 +87,69 @@ event bro_init() &priority=5
	Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats, $path="stats"]);
	}

event check_stats(then: time, last_ns: NetStats, last_cs: ConnStats, last_ps: ProcStats, last_es: EventStats, last_rs: ReassemblerStats, last_ts: TimerStats, last_fs: FileAnalysisStats, last_ds: DNSStats)
	{
	local nettime = network_time();
	local ns = get_net_stats();
	local cs = get_conn_stats();
	local ps = get_proc_stats();
	local es = get_event_stats();
	local rs = get_reassembler_stats();
	local ts = get_timer_stats();
	local fs = get_file_analysis_stats();
	local ds = get_dns_stats();

	if ( bro_is_terminating() )
		# No more stats will be written or scheduled when Bro is
		# shutting down.
		return;

	local info: Info = [$ts=nettime,
	                    $peer=peer_description,
	                    $mem=ps$mem/1048576,
	                    $pkts_proc=ns$pkts_recvd - last_ns$pkts_recvd,
$bytes_recv = ns$bytes_recvd - last_ns$bytes_recvd,
$active_tcp_conns=cs$num_tcp_conns,
$tcp_conns=cs$cumulative_tcp_conns - last_cs$cumulative_tcp_conns,
$active_udp_conns=cs$num_udp_conns,
$udp_conns=cs$cumulative_udp_conns - last_cs$cumulative_udp_conns,
$active_icmp_conns=cs$num_icmp_conns,
$icmp_conns=cs$cumulative_icmp_conns - last_cs$cumulative_icmp_conns,
$reassem_tcp_size=rs$tcp_size,
$reassem_file_size=rs$file_size,
$reassem_frag_size=rs$frag_size,
$reassem_unknown_size=rs$unknown_size,
$events_proc=es$dispatched - last_es$dispatched,
$events_queued=es$queued - last_es$queued,
$timers=ts$cumulative - last_ts$cumulative,
$active_timers=ts$current,
$files=fs$cumulative - last_fs$cumulative,
$active_files=fs$current,
$dns_requests=ds$requests - last_ds$requests,
$active_dns_requests=ds$pending
];
# Someone's going to have to explain what this is and add a field to the Info record.
# info$util = 100.0*((ps$user_time + ps$system_time) - (last_ps$user_time + last_ps$system_time))/(now-then);
	if ( reading_live_traffic() )
		{
		info$pkt_lag = current_time() - nettime;
		info$pkts_dropped = ns$pkts_dropped - last_ns$pkts_dropped;
		info$pkts_link = ns$pkts_link - last_ns$pkts_link;
		}

	Log::write(Stats::LOG, info);

	schedule report_interval { check_stats(nettime, ns, cs, ps, es, rs, ts, fs, ds) };
	}

event bro_init()
	{
	schedule report_interval { check_stats(network_time(), get_net_stats(), get_conn_stats(), get_proc_stats(), get_event_stats(), get_reassembler_stats(), get_timer_stats(), get_file_analysis_stats(), get_dns_stats()) };
	}
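The rewritten check_stats() applies one pattern throughout: every cumulative counter is logged as a delta against the sample captured at the previous interval, and the current sample is handed to the rescheduled event for next time. A hypothetical Python sketch of that pattern:

```python
def stats_delta(current, last):
    """Per-interval deltas for a dict of cumulative counters."""
    return {name: current[name] - last[name] for name in current}

# Previous and current samples of two cumulative counters; the logged
# values are the differences, i.e. activity during the interval.
last_sample = {"events_dispatched": 100, "cumulative_tcp_conns": 10}
new_sample = {"events_dispatched": 340, "cumulative_tcp_conns": 25}

print(stats_delta(new_sample, last_sample))
# {'events_dispatched': 240, 'cumulative_tcp_conns': 15}
```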
@@ -118,6 +118,7 @@ include(BifCl)
set(BIF_SRCS
   bro.bif
   stats.bif
   event.bif
   const.bif
   types.bif

View file

@@ -108,9 +108,9 @@ bool ConnectionTimer::DoUnserialize(UnserialInfo* info)
 	return true;
 	}

-unsigned int Connection::total_connections = 0;
-unsigned int Connection::current_connections = 0;
-unsigned int Connection::external_connections = 0;
+uint64 Connection::total_connections = 0;
+uint64 Connection::current_connections = 0;
+uint64 Connection::external_connections = 0;

 IMPLEMENT_SERIAL(Connection, SER_CONNECTION);

@@ -220,11 +220,11 @@ public:
 	unsigned int MemoryAllocation() const;
 	unsigned int MemoryAllocationConnVal() const;

-	static unsigned int TotalConnections()
+	static uint64 TotalConnections()
 		{ return total_connections; }
-	static unsigned int CurrentConnections()
+	static uint64 CurrentConnections()
 		{ return current_connections; }
-	static unsigned int CurrentExternalConnections()
+	static uint64 CurrentExternalConnections()
 		{ return external_connections; }

 	// Returns true if the history was already seen, false otherwise.

@@ -315,9 +315,9 @@ protected:
 	unsigned int saw_first_orig_packet:1, saw_first_resp_packet:1;

 	// Count number of connections.
-	static unsigned int total_connections;
-	static unsigned int current_connections;
-	static unsigned int external_connections;
+	static uint64 total_connections;
+	static uint64 current_connections;
+	static uint64 external_connections;

 	string history;
 	uint32 hist_seen;

@@ -346,6 +346,7 @@ DFA_State* DFA_State_Cache::Lookup(const NFA_state_list& nfas,
 		++misses;
 		return 0;
 		}

+	++hits;
 	delete *hash;
 	*hash = 0;

@@ -433,19 +434,6 @@ void DFA_Machine::Dump(FILE* f)
 	start_state->ClearMarks();
 	}

-void DFA_Machine::DumpStats(FILE* f)
-	{
-	DFA_State_Cache::Stats stats;
-	dfa_state_cache->GetStats(&stats);
-
-	fprintf(f, "Computed dfa_states = %d; Classes = %d; Computed trans. = %d; Uncomputed trans. = %d\n",
-		stats.dfa_states, EC()->NumClasses(),
-		stats.computed, stats.uncomputed);
-
-	fprintf(f, "DFA cache hits = %d; misses = %d\n",
-		stats.hits, stats.misses);
-	}
-
 unsigned int DFA_Machine::MemoryAllocation() const
 	{
 	DFA_State_Cache::Stats s;

@@ -89,10 +89,9 @@ public:
 	int NumEntries() const	{ return states.Length(); }

 	struct Stats {
-		unsigned int dfa_states;	// Sum of all NFA states
+		// Sum over all NFA states per DFA state.
 		unsigned int nfa_states;
+		unsigned int dfa_states;
 		unsigned int computed;
 		unsigned int uncomputed;
 		unsigned int mem;

@@ -132,7 +131,6 @@ public:
 	void Describe(ODesc* d) const;
 	void Dump(FILE* f);
-	void DumpStats(FILE* f);
 	unsigned int MemoryAllocation() const;

@@ -66,6 +66,7 @@ Dictionary::Dictionary(dict_order ordering, int initial_size)
 	delete_func = 0;
 	tbl_next_ind = 0;

+	cumulative_entries = 0;
 	num_buckets2 = num_entries2 = max_num_entries2 = thresh_entries2 = 0;
 	den_thresh2 = 0;
 	}

@@ -444,6 +445,7 @@ void* Dictionary::Insert(DictEntry* new_entry, int copy_key)
 	// on lists than prepending.
 	chain->append(new_entry);

+	++cumulative_entries;
 	if ( *max_num_entries_ptr < ++*num_entries_ptr )
 		*max_num_entries_ptr = *num_entries_ptr;

@@ -71,6 +71,12 @@ public:
 			max_num_entries + max_num_entries2 : max_num_entries;
 		}

+	// Total number of entries ever.
+	uint64 NumCumulativeInserts() const
+		{
+		return cumulative_entries;
+		}
+
 	// True if the dictionary is ordered, false otherwise.
 	int IsOrdered() const	{ return order != 0; }

@@ -166,6 +172,7 @@ private:
 	int num_buckets;
 	int num_entries;
 	int max_num_entries;
+	uint64 cumulative_entries;
 	double den_thresh;
 	int thresh_entries;

@@ -10,8 +10,8 @@
 EventMgr mgr;

-int num_events_queued = 0;
-int num_events_dispatched = 0;
+uint64 num_events_queued = 0;
+uint64 num_events_dispatched = 0;

 Event::Event(EventHandlerPtr arg_handler, val_list* arg_args,
 	     SourceID arg_src, analyzer::ID arg_aid, TimerMgr* arg_mgr,

@@ -72,8 +72,8 @@ protected:
 	Event* next_event;
 };

-extern int num_events_queued;
-extern int num_events_dispatched;
+extern uint64 num_events_queued;
+extern uint64 num_events_dispatched;

 class EventMgr : public BroObj {
 public:

@@ -28,7 +28,7 @@ void FragTimer::Dispatch(double t, int /* is_expire */)
 FragReassembler::FragReassembler(NetSessions* arg_s,
 			const IP_Hdr* ip, const u_char* pkt,
 			HashKey* k, double t)
-	: Reassembler(0)
+	: Reassembler(0, REASSEM_FRAG)
 	{
 	s = arg_s;
 	key = k;

@@ -628,10 +628,12 @@ void builtin_error(const char* msg, BroObj* arg)
 	}

 #include "bro.bif.func_h"
+#include "stats.bif.func_h"
 #include "reporter.bif.func_h"
 #include "strings.bif.func_h"

 #include "bro.bif.func_def"
+#include "stats.bif.func_def"
 #include "reporter.bif.func_def"
 #include "strings.bif.func_def"

@@ -640,13 +642,22 @@ void builtin_error(const char* msg, BroObj* arg)
 void init_builtin_funcs()
 	{
-	bro_resources = internal_type("bro_resources")->AsRecordType();
-	net_stats = internal_type("NetStats")->AsRecordType();
-	matcher_stats = internal_type("matcher_stats")->AsRecordType();
+	ProcStats = internal_type("ProcStats")->AsRecordType();
+	NetStats = internal_type("NetStats")->AsRecordType();
+	MatcherStats = internal_type("MatcherStats")->AsRecordType();
+	ConnStats = internal_type("ConnStats")->AsRecordType();
+	ReassemblerStats = internal_type("ReassemblerStats")->AsRecordType();
+	DNSStats = internal_type("DNSStats")->AsRecordType();
+	GapStats = internal_type("GapStats")->AsRecordType();
+	EventStats = internal_type("EventStats")->AsRecordType();
+	TimerStats = internal_type("TimerStats")->AsRecordType();
+	FileAnalysisStats = internal_type("FileAnalysisStats")->AsRecordType();
+	ThreadStats = internal_type("ThreadStats")->AsRecordType();
 	var_sizes = internal_type("var_sizes")->AsTableType();
-	gap_info = internal_type("gap_info")->AsRecordType();

 #include "bro.bif.func_init"
+#include "stats.bif.func_init"
 #include "reporter.bif.func_init"
 #include "strings.bif.func_init"

@@ -285,11 +285,6 @@ void NFA_Machine::Dump(FILE* f)
 	first_state->ClearMarks();
 	}

-void NFA_Machine::DumpStats(FILE* f)
-	{
-	fprintf(f, "highest NFA state ID is %d\n", nfa_state_id);
-	}
-
 NFA_Machine* make_alternate(NFA_Machine* m1, NFA_Machine* m2)
 	{
 	if ( ! m1 )

@@ -105,7 +105,6 @@ public:
 	void Describe(ODesc* d) const;
 	void Dump(FILE* f);
-	void DumpStats(FILE* f);

 	unsigned int MemoryAllocation() const
 		{ return padded_sizeof(*this) + first_state->TotalMemoryAllocation(); }

@@ -199,7 +199,6 @@ Val* pkt_profile_file;
 int load_sample_freq;

 double gap_report_freq;
-RecordType* gap_info;

 int packet_filter_default;

@@ -202,9 +202,6 @@ extern Val* pkt_profile_file;
 extern int load_sample_freq;

 extern double gap_report_freq;
-extern RecordType* gap_info;

 extern int packet_filter_default;
 extern int sig_max_group_size;

@@ -13,7 +13,7 @@ PriorityQueue::PriorityQueue(int initial_size)
 	{
 	max_heap_size = initial_size;
 	heap = new PQ_Element*[max_heap_size];
-	peak_heap_size = heap_size = 0;
+	peak_heap_size = heap_size = cumulative_num = 0;
 	}

 PriorityQueue::~PriorityQueue()

@@ -62,6 +62,8 @@ int PriorityQueue::Add(PQ_Element* e)
 	BubbleUp(heap_size);

+	++cumulative_num;
+
 	if ( ++heap_size > peak_heap_size )
 		peak_heap_size = heap_size;

@@ -4,6 +4,7 @@
 #define __PriorityQueue__

 #include <math.h>
+#include "util.h"

 class PriorityQueue;

@@ -53,6 +54,7 @@ public:
 	int Size() const	{ return heap_size; }
 	int PeakSize() const	{ return peak_heap_size; }
+	uint64 CumulativeNum() const	{ return cumulative_num; }

 protected:
 	int Resize(int new_size);

@@ -92,6 +94,7 @@ protected:
 	int heap_size;
 	int peak_heap_size;
 	int max_heap_size;
+	uint64 cumulative_num;
 };

 #endif

@@ -1,6 +1,7 @@
 // See the file "COPYING" in the main distribution directory for copyright.

 #include <algorithm>
+#include <vector>

 #include "bro-config.h"

@@ -10,7 +11,8 @@
 static const bool DEBUG_reassem = false;

 DataBlock::DataBlock(const u_char* data, uint64 size, uint64 arg_seq,
-			DataBlock* arg_prev, DataBlock* arg_next)
+			DataBlock* arg_prev, DataBlock* arg_next,
+			ReassemblerType reassem_type)
 	{
 	seq = arg_seq;
 	upper = seq + size;

@@ -26,17 +28,21 @@ DataBlock::DataBlock(const u_char* data, uint64 size, uint64 arg_seq,
 	if ( next )
 		next->prev = this;

+	rtype = reassem_type;
+	Reassembler::sizes[rtype] += pad_size(size) + padded_sizeof(DataBlock);
 	Reassembler::total_size += pad_size(size) + padded_sizeof(DataBlock);
 	}

 uint64 Reassembler::total_size = 0;
+uint64 Reassembler::sizes[REASSEM_NUM];

-Reassembler::Reassembler(uint64 init_seq)
+Reassembler::Reassembler(uint64 init_seq, ReassemblerType reassem_type)
 	{
 	blocks = last_block = 0;
 	old_blocks = last_old_block = 0;
 	total_old_blocks = max_old_blocks = 0;
 	trim_seq = last_reassem_seq = init_seq;
+	rtype = reassem_type;
 	}

 Reassembler::~Reassembler()

@@ -110,7 +116,7 @@ void Reassembler::NewBlock(double t, uint64 seq, uint64 len, const u_char* data)
 	if ( ! blocks )
 		blocks = last_block = start_block =
-			new DataBlock(data, len, seq, 0, 0);
+			new DataBlock(data, len, seq, 0, 0, rtype);
 	else
 		start_block = AddAndCheck(blocks, seq, upper_seq, data);

@@ -275,7 +281,7 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 	if ( last_block && seq == last_block->upper )
 		{
 		last_block = new DataBlock(data, upper - seq, seq,
-					last_block, 0);
+					last_block, 0, rtype);
 		return last_block;
 		}

@@ -288,7 +294,7 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 		{
 		// b is the last block, and it comes completely before
 		// the new block.
-		last_block = new DataBlock(data, upper - seq, seq, b, 0);
+		last_block = new DataBlock(data, upper - seq, seq, b, 0, rtype);
 		return last_block;
 		}

@@ -297,7 +303,7 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 	if ( upper <= b->seq )
 		{
 		// The new block comes completely before b.
-		new_b = new DataBlock(data, upper - seq, seq, b->prev, b);
+		new_b = new DataBlock(data, upper - seq, seq, b->prev, b, rtype);
 		if ( b == blocks )
 			blocks = new_b;
 		return new_b;

@@ -308,7 +314,7 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 		{
 		// The new block has a prefix that comes before b.
 		uint64 prefix_len = b->seq - seq;
-		new_b = new DataBlock(data, prefix_len, seq, b->prev, b);
+		new_b = new DataBlock(data, prefix_len, seq, b->prev, b, rtype);
 		if ( b == blocks )
 			blocks = new_b;

@@ -342,6 +348,11 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
 	return new_b;
 	}

+uint64 Reassembler::MemoryAllocation(ReassemblerType rtype)
+	{
+	return Reassembler::sizes[rtype];
+	}
+
 bool Reassembler::Serialize(SerialInfo* info) const
 	{
 	return SerialObj::Serialize(info);

@@ -6,10 +6,23 @@
 #include "Obj.h"
 #include "IPAddr.h"

+// Whenever subclassing the Reassembler class
+// you should add to this for known subclasses.
+enum ReassemblerType {
+	REASSEM_UNKNOWN,
+	REASSEM_TCP,
+	REASSEM_FRAG,
+	REASSEM_FILE,
+
+	// Terminal value. Add new above.
+	REASSEM_NUM,
+};
+
 class DataBlock {
 public:
 	DataBlock(const u_char* data, uint64 size, uint64 seq,
-		DataBlock* prev, DataBlock* next);
+		DataBlock* prev, DataBlock* next,
+		ReassemblerType reassem_type = REASSEM_UNKNOWN);

 	~DataBlock();

@@ -19,13 +32,12 @@ public:
 	DataBlock* prev;	// previous block with lower seq #
 	uint64 seq, upper;
 	u_char* block;
+	ReassemblerType rtype;
 };

 class Reassembler : public BroObj {
 public:
-	Reassembler(uint64 init_seq);
+	Reassembler(uint64 init_seq, ReassemblerType reassem_type = REASSEM_UNKNOWN);
 	virtual ~Reassembler();

 	void NewBlock(double t, uint64 seq, uint64 len, const u_char* data);

@@ -51,6 +63,9 @@ public:
 	// Sum over all data buffered in some reassembler.
 	static uint64 TotalMemoryAllocation()	{ return total_size; }

+	// Data buffered by type of reassembler.
+	static uint64 MemoryAllocation(ReassemblerType rtype);
+
 	void SetMaxOldBlocks(uint32 count)	{ max_old_blocks = count; }

 protected:

@@ -82,12 +97,16 @@ protected:
 	uint32 max_old_blocks;
 	uint32 total_old_blocks;

+	ReassemblerType rtype;
+
 	static uint64 total_size;
+	static uint64 sizes[REASSEM_NUM];
 };

 inline DataBlock::~DataBlock()
 	{
 	Reassembler::total_size -= pad_size(upper - seq) + padded_sizeof(DataBlock);
+	Reassembler::sizes[rtype] -= pad_size(upper - seq) + padded_sizeof(DataBlock);
 	delete [] block;
 	}

@@ -1174,7 +1174,7 @@ void RuleMatcher::GetStats(Stats* stats, RuleHdrTest* hdr_test)
 		stats->mem = 0;
 		stats->hits = 0;
 		stats->misses = 0;
-		stats->avg_nfa_states = 0;
+		stats->nfa_states = 0;
 		hdr_test = root;
 		}

@@ -1195,15 +1195,10 @@ void RuleMatcher::GetStats(Stats* stats, RuleHdrTest* hdr_test)
 			stats->mem += cstats.mem;
 			stats->hits += cstats.hits;
 			stats->misses += cstats.misses;
-			stats->avg_nfa_states += cstats.nfa_states;
+			stats->nfa_states += cstats.nfa_states;
 			}
 		}

-	if ( stats->dfa_states )
-		stats->avg_nfa_states /= stats->dfa_states;
-	else
-		stats->avg_nfa_states = 0;
-
 	for ( RuleHdrTest* h = hdr_test->child; h; h = h->sibling )
 		GetStats(stats, h);
 	}

@@ -297,6 +297,9 @@ public:
 	struct Stats {
 		unsigned int matchers;	// # distinct RE matchers

+		// NFA states across all matchers.
+		unsigned int nfa_states;
+
 		// # DFA states across all matchers
 		unsigned int dfa_states;
 		unsigned int computed;	// # computed DFA state transitions

@@ -305,9 +308,6 @@ public:
 		// # cache hits (sampled, multiply by MOVE_TO_FRONT_SAMPLE_SIZE)
 		unsigned int hits;
 		unsigned int misses;	// # cache misses
-
-		// Average # NFA states per DFA state.
-		unsigned int avg_nfa_states;
 	};

 	Val* BuildRuleStateValue(const Rule* rule,

@@ -1156,19 +1156,18 @@ void NetSessions::Drain()
 void NetSessions::GetStats(SessionStats& s) const
 	{
 	s.num_TCP_conns = tcp_conns.Length();
+	s.cumulative_TCP_conns = tcp_conns.NumCumulativeInserts();
 	s.num_UDP_conns = udp_conns.Length();
+	s.cumulative_UDP_conns = udp_conns.NumCumulativeInserts();
 	s.num_ICMP_conns = icmp_conns.Length();
+	s.cumulative_ICMP_conns = icmp_conns.NumCumulativeInserts();
 	s.num_fragments = fragments.Length();
 	s.num_packets = num_packets_processed;
-	s.num_timers = timer_mgr->Size();
-	s.num_events_queued = num_events_queued;
-	s.num_events_dispatched = num_events_dispatched;

 	s.max_TCP_conns = tcp_conns.MaxLength();
 	s.max_UDP_conns = udp_conns.MaxLength();
 	s.max_ICMP_conns = icmp_conns.MaxLength();
 	s.max_fragments = fragments.MaxLength();
-	s.max_timers = timer_mgr->PeakSize();
 	}

 Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,

@@ -32,19 +32,20 @@ namespace analyzer { namespace arp { class ARP_Analyzer; } }

 struct SessionStats {
 	int num_TCP_conns;
-	int num_UDP_conns;
-	int num_ICMP_conns;
-	int num_fragments;
-	int num_packets;
-	int num_timers;
-	int num_events_queued;
-	int num_events_dispatched;
 	int max_TCP_conns;
+	uint64 cumulative_TCP_conns;
+
+	int num_UDP_conns;
 	int max_UDP_conns;
+	uint64 cumulative_UDP_conns;
+
+	int num_ICMP_conns;
 	int max_ICMP_conns;
+	uint64 cumulative_ICMP_conns;
+
+	int num_fragments;
 	int max_fragments;
-	int max_timers;
+
+	uint64 num_packets;
 };

 // Drains and deletes a timer manager if it hasn't seen any advances

@@ -242,7 +243,7 @@ protected:
 	OSFingerprint* SYN_OS_Fingerprinter;
 	int build_backdoor_analyzer;
 	int dump_this_packet;	// if true, current packet should be recorded
-	int num_packets_processed;
+	uint64 num_packets_processed;
 	PacketProfiler* pkt_profiler;

 	// We may use independent timer managers for different sets of related

@@ -14,7 +14,7 @@
 #include "broker/Manager.h"
 #endif

-int killed_by_inactivity = 0;
+uint64 killed_by_inactivity = 0;

 uint64 tot_ack_events = 0;
 uint64 tot_ack_bytes = 0;

@@ -82,7 +82,7 @@ void ProfileLogger::Log()
 	struct timeval tv_utime = r.ru_utime;
 	struct timeval tv_stime = r.ru_stime;

-	unsigned int total, malloced;
+	uint64 total, malloced;
 	get_memory_usage(&total, &malloced);

 	static unsigned int first_total = 0;

@@ -110,7 +110,7 @@ void ProfileLogger::Log()
 		file->Write(fmt("\n%.06f ------------------------\n", network_time));
 		}

-	file->Write(fmt("%.06f Memory: total=%dK total_adj=%dK malloced: %dK\n",
+	file->Write(fmt("%.06f Memory: total=%" PRId64 "K total_adj=%" PRId64 "K malloced: %" PRId64 "K\n",
 		network_time, total / 1024, (total - first_total) / 1024,
 		malloced / 1024));

@@ -120,7 +120,7 @@ void ProfileLogger::Log()
 	int conn_mem_use = expensive ? sessions->ConnectionMemoryUsage() : 0;

-	file->Write(fmt("%.06f Conns: total=%d current=%d/%d ext=%d mem=%dK avg=%.1f table=%dK connvals=%dK\n",
+	file->Write(fmt("%.06f Conns: total=%" PRIu64 " current=%" PRIu64 "/%" PRIi32 " ext=%" PRIu64 " mem=%" PRIi32 "K avg=%.1f table=%" PRIu32 "K connvals=%" PRIu32 "K\n",
 		network_time,
 		Connection::TotalConnections(),
 		Connection::CurrentConnections(),

@@ -161,10 +161,10 @@ void ProfileLogger::Log()
 		));
 	*/

-	file->Write(fmt("%.06f Connections expired due to inactivity: %d\n",
+	file->Write(fmt("%.06f Connections expired due to inactivity: %" PRIu64 "\n",
 		network_time, killed_by_inactivity));

-	file->Write(fmt("%.06f Total reassembler data: %" PRIu64"K\n", network_time,
+	file->Write(fmt("%.06f Total reassembler data: %" PRIu64 "K\n", network_time,
 		Reassembler::TotalMemoryAllocation() / 1024));

 	// Signature engine.

@@ -173,9 +173,9 @@ void ProfileLogger::Log()
 		RuleMatcher::Stats stats;
 		rule_matcher->GetStats(&stats);

-		file->Write(fmt("%06f RuleMatcher: matchers=%d dfa_states=%d ncomputed=%d "
-				"mem=%dK avg_nfa_states=%d\n", network_time, stats.matchers,
-				stats.dfa_states, stats.computed, stats.mem / 1024, stats.avg_nfa_states));
+		file->Write(fmt("%06f RuleMatcher: matchers=%d nfa_states=%d dfa_states=%d "
+				"ncomputed=%d mem=%dK\n", network_time, stats.matchers,
+				stats.nfa_states, stats.dfa_states, stats.computed, stats.mem / 1024));
 		}

 	file->Write(fmt("%.06f Timers: current=%d max=%d mem=%dK lag=%.2fs\n",

@@ -469,10 +469,10 @@ void PacketProfiler::ProfilePkt(double t, unsigned int bytes)
 	double curr_Rtime =
 		ptimestamp.tv_sec + ptimestamp.tv_usec / 1e6;

-	unsigned int curr_mem;
+	uint64 curr_mem;
 	get_memory_usage(&curr_mem, 0);

-	file->Write(fmt("%.06f %.03f %d %d %.03f %.03f %.03f %d\n",
+	file->Write(fmt("%.06f %.03f %" PRIu64 " %" PRIu64 " %.03f %.03f %.03f %" PRIu64 "\n",
 		t, time-last_timestamp, pkt_cnt, byte_cnt,
 		curr_Rtime - last_Rtime,
 		curr_Utime - last_Utime,

@@ -102,7 +102,7 @@ extern ProfileLogger* segment_logger;
 extern SampleLogger* sample_logger;

 // Connection statistics.
-extern int killed_by_inactivity;
+extern uint64 killed_by_inactivity;

 // Content gap statistics.
 extern uint64 tot_ack_events;

@@ -127,9 +127,9 @@ protected:
 	double update_freq;
 	double last_Utime, last_Stime, last_Rtime;
 	double last_timestamp, time;
-	unsigned int last_mem;
-	unsigned int pkt_cnt;
-	unsigned int byte_cnt;
+	uint64 last_mem;
+	uint64 pkt_cnt;
+	uint64 byte_cnt;
 };

 #endif

@@ -109,11 +109,12 @@ public:
 	virtual int Size() const = 0;
 	virtual int PeakSize() const = 0;
+	virtual uint64 CumulativeNum() const = 0;

 	double LastTimestamp() const	{ return last_timestamp; }

 	// Returns time of last advance in global network time.
 	double LastAdvance() const	{ return last_advance; }

 	static unsigned int* CurrentTimers()	{ return current_timers; }

 protected:

@@ -148,6 +149,7 @@ public:
 	int Size() const	{ return q->Size(); }
 	int PeakSize() const	{ return q->PeakSize(); }
+	uint64 CumulativeNum() const	{ return q->CumulativeNum(); }

 	unsigned int MemoryUsage() const;

 protected:

@@ -170,6 +172,7 @@ public:
 	int Size() const	{ return cq_size(cq); }
 	int PeakSize() const	{ return cq_max_size(cq); }
+	uint64 CumulativeNum() const	{ return cq_cumulative_num(cq); }

 	unsigned int MemoryUsage() const;

 protected:

@@ -395,7 +395,7 @@ bool Analyzer::AddChildAnalyzer(Analyzer* analyzer, bool init)
 	// the list.
 	analyzer->parent = this;
-	children.push_back(analyzer);
+	new_children.push_back(analyzer);

 	if ( init )
 		analyzer->Init();

@@ -474,6 +474,13 @@ Analyzer* Analyzer::FindChild(ID arg_id)
 			return child;
 		}

+	LOOP_OVER_GIVEN_CHILDREN(i, new_children)
+		{
+		Analyzer* child = (*i)->FindChild(arg_id);
+		if ( child )
+			return child;
+		}
+
 	return 0;
 	}

@@ -489,6 +496,13 @@ Analyzer* Analyzer::FindChild(Tag arg_tag)
 			return child;
 		}

+	LOOP_OVER_GIVEN_CHILDREN(i, new_children)
+		{
+		Analyzer* child = (*i)->FindChild(arg_tag);
+		if ( child )
+			return child;
+		}
+
 	return 0;
 	}

@@ -427,6 +427,10 @@ public:
 	/**
 	 * Returns a list of all direct child analyzers.
+	 *
+	 * Note that this does not include the list of analyzers that are
+	 * currently queued up to be added. If you just added an analyzer,
+	 * it will not immediately be in this list.
 	 */
 	const analyzer_list& GetChildren()	{ return children; }

@@ -361,7 +361,6 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
 	icmp::ICMP_Analyzer* icmp = 0;
 	TransportLayerAnalyzer* root = 0;
 	pia::PIA* pia = 0;
-	bool analyzed = false;
 	bool check_port = false;

 	switch ( conn->ConnTransport() ) {

@@ -383,7 +382,6 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
 	case TRANSPORT_ICMP: {
 		root = icmp = new icmp::ICMP_Analyzer(conn);
 		DBG_ANALYZER(conn, "activated ICMP analyzer");
-		analyzed = true;
 		break;
 		}

@@ -495,16 +493,10 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
 	if ( pia )
 		root->AddChildAnalyzer(pia->AsAnalyzer());

-	if ( root->GetChildren().size() )
-		analyzed = true;
-
 	conn->SetRootAnalyzer(root, pia);
 	root->Init();
 	root->InitChildren();

-	if ( ! analyzed )
-		conn->SetLifetime(non_analyzed_lifetime);
-
 	PLUGIN_HOOK_VOID(HOOK_SETUP_ANALYZER_TREE, HookSetupAnalyzerTree(conn));

 	return true;

@@ -45,4 +45,5 @@ add_subdirectory(syslog)
 add_subdirectory(tcp)
 add_subdirectory(teredo)
 add_subdirectory(udp)
+add_subdirectory(xmpp)
 add_subdirectory(zip)

@@ -756,6 +756,7 @@ void SMTP_Analyzer::UpdateState(const int cmd_code, const int reply_code, bool o
 			break;

 		case SMTP_CMD_STARTTLS:
+		case SMTP_CMD_X_ANONYMOUSTLS:
 			if ( st != SMTP_READY )
 				UnexpectedCommand(cmd_code, reply_code);

@@ -818,6 +819,10 @@ int SMTP_Analyzer::ParseCmd(int cmd_len, const char* cmd)
 	if ( ! cmd )
 		return -1;

+	// Special case because we cannot define our usual macros with "-".
+	if ( strncmp(cmd, "X-ANONYMOUSTLS", cmd_len) == 0 )
+		return SMTP_CMD_X_ANONYMOUSTLS;
+
 	for ( int code = SMTP_CMD_EHLO; code < SMTP_CMD_LAST; ++code )
 		if ( ! strncasecmp(cmd, smtp_cmd_word[code - SMTP_CMD_EHLO], cmd_len) )
 			return code;

@@ -30,7 +30,7 @@ typedef enum {
 	SMTP_IN_DATA,		// 6: after DATA
 	SMTP_AFTER_DATA,	// 7: after . and before reply
 	SMTP_IN_AUTH,		// 8: after AUTH and 334
-	SMTP_IN_TLS,		// 9: after STARTTLS and 220
+	SMTP_IN_TLS,		// 9: after STARTTLS/X-ANONYMOUSTLS and 220
 	SMTP_QUIT,		// 10: after QUIT
 	SMTP_AFTER_GAP,		// 11: after a gap is detected
 	SMTP_GAP_RECOVERY,	// 12: after the first reply after a gap

@@ -11,6 +11,8 @@ SMTP_CMD_DEF(VRFY)
 SMTP_CMD_DEF(EXPN)
 SMTP_CMD_DEF(HELP)
 SMTP_CMD_DEF(NOOP)
+SMTP_CMD_DEF(STARTTLS)		// RFC 2487
+SMTP_CMD_DEF(X_ANONYMOUSTLS)
 
 // The following two commands never explicitly appear in user input.
 SMTP_CMD_DEF(CONN_ESTABLISHMENT) // not an explicit SMTP command
@@ -20,15 +22,14 @@ SMTP_CMD_DEF(END_OF_DATA) // not an explicit SMTP command
 // become deprecated (RFC 2821).
 
 // Client SHOULD NOT use SEND/SOML/SAML
 SMTP_CMD_DEF(SEND)
 SMTP_CMD_DEF(SOML)
 SMTP_CMD_DEF(SAML)
 
 // System SHOULD NOT support TURN in absence of authentication.
 SMTP_CMD_DEF(TURN)
 
 // SMTP extensions not supported yet.
-SMTP_CMD_DEF(STARTTLS)		// RFC 2487
 SMTP_CMD_DEF(BDAT)		// RFC 3030
 SMTP_CMD_DEF(ETRN)		// RFC 1985
 SMTP_CMD_DEF(AUTH)		// RFC 2554


@@ -99,8 +99,8 @@ event smtp_data%(c: connection, is_orig: bool, data: string%);
 ## .. bro:see:: smtp_data smtp_request smtp_reply
 event smtp_unexpected%(c: connection, is_orig: bool, msg: string, detail: string%);
 
-## Generated if a connection switched to using TLS using STARTTLS. After this
-## event no more SMTP events will be raised for the connection. See the SSL
+## Generated if a connection switched to using TLS using STARTTLS or X-ANONYMOUSTLS.
+## After this event no more SMTP events will be raised for the connection. See the SSL
 ## analyzer for related SSL events, which will now be generated.
 ##
 ## c: The connection.


@@ -120,7 +120,7 @@ event ssh1_server_host_key%(c: connection, p: string, e: string%);
 ## This event is generated when an :abbr:`SSH (Secure Shell)`
 ## encrypted packet is seen. This event is not handled by default, but
 ## is provided for heuristic analysis scripts. Note that you have to set
-## :bro:id:`SSH::skip_processing_after_detection` to false to use this
+## :bro:id:`SSH::disable_analyzer_after_detection` to false to use this
 ## event. This carries a performance penalty.
 ##
 ## c: The connection over which the :abbr:`SSH (Secure Shell)`


@@ -35,6 +35,11 @@ void DTLS_Analyzer::Done()
 void DTLS_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, uint64 seq, const IP_Hdr* ip, int caplen)
 	{
 	Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
+
+	// The packet carries the STUN magic cookie (RFC 5389); skip it without complaining.
+	if ( len > 20 && data[4] == 0x21 && data[5] == 0x12 && data[6] == 0xa4 && data[7] == 0x42 )
+		return;
+
 	interp->NewData(orig, data, data + len);
 	}


@@ -75,7 +75,7 @@ type ClientHello(rec: HandshakeRecord) = record {
 	session_len : uint8;
 	session_id : uint8[session_len];
 	dtls_cookie: case client_version of {
-		DTLSv10 -> cookie: ClientHelloCookie(rec);
+		DTLSv10, DTLSv12 -> cookie: ClientHelloCookie(rec);
 		default -> nothing: bytestring &length=0;
 	};
 	csuit_len : uint16 &check(csuit_len > 1 && csuit_len % 2 == 0);


@@ -408,11 +408,6 @@ void TCP_Analyzer::EnableReassembly()
 				    TCP_Reassembler::Forward, orig),
 		       new TCP_Reassembler(this, this,
 				    TCP_Reassembler::Forward, resp));
-
-	reassembling = 1;
-
-	if ( new_connection_contents )
-		Event(new_connection_contents);
 	}
 void TCP_Analyzer::SetReassembler(TCP_Reassembler* rorig,
@@ -423,10 +418,10 @@ void TCP_Analyzer::SetReassembler(TCP_Reassembler* rorig,
 	resp->AddReassembler(rresp);
 	rresp->SetDstAnalyzer(this);
 
-	reassembling = 1;
-
-	if ( new_connection_contents )
+	if ( new_connection_contents && reassembling == 0 )
 		Event(new_connection_contents);
+
+	reassembling = 1;
 	}
 
 const struct tcphdr* TCP_Analyzer::ExtractTCP_Header(const u_char*& data,


@@ -5,9 +5,6 @@
 #include "analyzer/protocol/tcp/TCP.h"
 #include "TCP_Endpoint.h"
 
-// Only needed for gap_report events.
-#include "Event.h"
-
 #include "events.bif.h"
 
 using namespace analyzer::tcp;
@@ -18,17 +15,11 @@ const bool DEBUG_tcp_contents = false;
 const bool DEBUG_tcp_connection_close = false;
 const bool DEBUG_tcp_match_undelivered = false;
 
-static double last_gap_report = 0.0;
-static uint64 last_ack_events = 0;
-static uint64 last_ack_bytes = 0;
-static uint64 last_gap_events = 0;
-static uint64 last_gap_bytes = 0;
-
 TCP_Reassembler::TCP_Reassembler(analyzer::Analyzer* arg_dst_analyzer,
 				 TCP_Analyzer* arg_tcp_analyzer,
 				 TCP_Reassembler::Type arg_type,
 				 TCP_Endpoint* arg_endp)
-	: Reassembler(1)
+	: Reassembler(1, REASSEM_TCP)
 	{
 	dst_analyzer = arg_dst_analyzer;
 	tcp_analyzer = arg_tcp_analyzer;
@@ -45,7 +36,7 @@ TCP_Reassembler::TCP_Reassembler(analyzer::Analyzer* arg_dst_analyzer,
 	if ( tcp_max_old_segments )
 		SetMaxOldBlocks(tcp_max_old_segments);
 
-	if ( tcp_contents )
+	if ( ::tcp_contents )
 		{
 		// Val dst_port_val(ntohs(Conn()->RespPort()), TYPE_PORT);
 		PortVal dst_port_val(ntohs(tcp_analyzer->Conn()->RespPort()),
@@ -387,7 +378,6 @@ void TCP_Reassembler::BlockInserted(DataBlock* start_block)
 		{ // New stuff.
 		uint64 len = b->Size();
 		uint64 seq = last_reassem_seq;
-
 		last_reassem_seq += len;
 
 		if ( record_contents_file )
@@ -548,35 +538,6 @@ void TCP_Reassembler::AckReceived(uint64 seq)
 		tot_gap_bytes += num_missing;
 		tcp_analyzer->Event(ack_above_hole);
 		}
-
-	double dt = network_time - last_gap_report;
-
-	if ( gap_report && gap_report_freq > 0.0 &&
-	     dt >= gap_report_freq )
-		{
-		uint64 devents = tot_ack_events - last_ack_events;
-		uint64 dbytes = tot_ack_bytes - last_ack_bytes;
-		uint64 dgaps = tot_gap_events - last_gap_events;
-		uint64 dgap_bytes = tot_gap_bytes - last_gap_bytes;
-
-		RecordVal* r = new RecordVal(gap_info);
-		r->Assign(0, new Val(devents, TYPE_COUNT));
-		r->Assign(1, new Val(dbytes, TYPE_COUNT));
-		r->Assign(2, new Val(dgaps, TYPE_COUNT));
-		r->Assign(3, new Val(dgap_bytes, TYPE_COUNT));
-
-		val_list* vl = new val_list;
-		vl->append(new IntervalVal(dt, Seconds));
-		vl->append(r);
-
-		mgr.QueueEvent(gap_report, vl);
-
-		last_gap_report = network_time;
-		last_ack_events = tot_ack_events;
-		last_ack_bytes = tot_ack_bytes;
-		last_gap_events = tot_gap_events;
-		last_gap_bytes = tot_gap_bytes;
-		}
 	}
 
 	// Check EOF here because t_reassem->LastReassemSeq() may have


@@ -63,26 +63,6 @@ function get_resp_seq%(cid: conn_id%): count
 	}
 	%}
 
-## Returns statistics about TCP gaps.
-##
-## Returns: A record with TCP gap statistics.
-##
-## .. bro:see:: do_profiling
-##              net_stats
-##              resource_usage
-##              dump_rule_stats
-##              get_matcher_stats
-function get_gap_summary%(%): gap_info
-	%{
-	RecordVal* r = new RecordVal(gap_info);
-	r->Assign(0, new Val(tot_ack_events, TYPE_COUNT));
-	r->Assign(1, new Val(tot_ack_bytes, TYPE_COUNT));
-	r->Assign(2, new Val(tot_gap_events, TYPE_COUNT));
-	r->Assign(3, new Val(tot_gap_bytes, TYPE_COUNT));
-	return r;
-	%}
-
 ## Associates a file handle with a connection for writing TCP byte stream
 ## contents.
 ##


@@ -0,0 +1,12 @@
+include(BroPlugin)
+
+include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
+
+bro_plugin_begin(Bro XMPP)
+bro_plugin_cc(Plugin.cc)
+bro_plugin_cc(XMPP.cc)
+bro_plugin_bif(events.bif)
+bro_plugin_pac(xmpp.pac xmpp-analyzer.pac xmpp-protocol.pac)
+bro_plugin_end()

Some files were not shown because too many files have changed in this diff.