Merge branch 'master' into topic/jgras/intel-update

This commit is contained in:
Jan Grashoefer 2016-05-11 18:34:15 +02:00
commit 859eb5eac7
306 changed files with 6721 additions and 3148 deletions

143
CHANGES
View file

@ -1,4 +1,147 @@
2.4-544 | 2016-05-07 12:19:07 -0700
* Switching all use of gmtime and localtime to use reentrant
variants. (Seth Hall)
2.4-541 | 2016-05-06 17:58:45 -0700
* A set of new built-in functions for gathering execution statistics:
get_net_stats(), get_conn_stats(), get_proc_stats(),
get_event_stats(), get_reassembler_stats(), get_dns_stats(),
get_timer_stats(), get_file_analysis_stats(), get_thread_stats(),
get_gap_stats(), get_matcher_stats().
net_stats() and resource_usage() have been superseded by these. (Seth
Hall)
* New policy script misc/stats.bro that records Bro execution
statistics in a standard Bro log file. (Seth Hall)
* A series of documentation improvements. (Daniel Thayer)
* Rudimentary XMPP StartTLS analyzer. It parses certificates out of
XMPP connections using StartTLS. It aborts processing if StartTLS
is not found. (Johanna Amann)
2.4-507 | 2016-05-03 11:18:16 -0700
* Fix incorrect type tags in Bro broker source code. These are just
used for error reporting. (Daniel Thayer)
* Update docs and tests of the fmt() function. (Daniel Thayer)
2.4-500 | 2016-05-03 11:16:50 -0700
* Updating submodule(s).
2.4-498 | 2016-04-28 11:34:52 -0700
* Rename Broker::print to Broker::send_print and Broker::event to
Broker::send_event to avoid using reserved keywords as function
names. (Daniel Thayer)
* Add script wrapper functions for Broker BIFs. This facilitates
documenting them through Broxygen. (Daniel Thayer)
* Extend, update, and clean up Broker tests. (Daniel Thayer)
* Intel: Allow providing uid/fuid instead of conn/file. (Johanna
Amann)
* Provide file IDs for hostname matches in certificates. (Johanna
Amann)
* Rudimentary IMAP StartTLS analyzer. It parses certificates out of
IMAP connections using StartTLS. It aborts processing if StartTLS
is not found. (Johanna Amann)
2.4-478 | 2016-04-28 09:56:24
* Fix parsing of x509 pre-y2k dates. (Johanna Amann)
* Fix small error in bif documentation. (Johanna Amann)
* Fix unknown data link type error message. (Vitaly Repin)
* Correcting spelling errors. (Jeannette Dopheide)
* Minor cleanup in ARP analyzer. (Johanna Amann)
* Fix parsing of pre-y2k dates in X509 certificates. (Johanna Amann)
* Fix small error in get_current_packet documentation. (Johanna Amann)
2.4-471 | 2016-04-25 15:37:15 -0700
* Add DNS tests for huge TTLs and CAA. (Johanna Amann)
* Add DNS "CAA" RR type and event. (Mark Taylor)
* Fix DNS response parsing: TTLs are unsigned. (Mark Taylor)
2.4-466 | 2016-04-22 16:25:33 -0700
* Rename BrokerStore and BrokerComm to Broker. Also split broker main.bro
into two scripts. (Daniel Thayer)
* Add get_current_packet_header bif. (Jan Grashoefer)
2.4-457 | 2016-04-22 08:36:27 -0700
* Fix Intel framework not checking the CERT_HASH indicator type. (Johanna Amann)
2.4-454 | 2016-04-14 10:06:58 -0400
* Additional mime types for file identification and a few fixes. (Seth Hall)
New file mime types:
- .ini files
- MS Registry policy files
- MS Registry files
- MS Registry format files (e.g. DESKTOP.DAT)
- MS Outlook PST files
- Apple AFPInfo files
Mime type fixes:
- MP3 files with ID3 tags.
- JSON and XML matchers were extended
* Avoid a macro name conflict on FreeBSD. (Seth Hall, Daniel Thayer)
2.4-452 | 2016-04-13 01:15:20 -0400
* Add a simple file entropy analyzer. (Seth Hall)
* Analyzer and bro script for RFB/VNC protocol (Martin van Hensbergen)
This analyzer parses the Remote Frame Buffer
protocol, usually referred to as the 'VNC protocol'.
It supports several dialects (3.3, 3.7, 3.8) and
also handles the Apple Remote Desktop variant.
It will log such facts as client/server versions,
authentication method used, authentication result,
height, width and name of the shared screen.
2.4-430 | 2016-04-07 13:36:36 -0700
* Fix regex literal in scripting documentation. (William Tom)
2.4-428 | 2016-04-07 13:33:08 -0700
* Confirm protocol in SNMP/SIP only if we saw a response SNMP/SIP
packet. (Vlad Grigorescu)
2.4-424 | 2016-03-24 13:38:47 -0700
* Only load openflow/netcontrol if compiled with broker. (Johanna Amann)
* Adding canonifier to test. (Robin Sommer)
2.4-422 | 2016-03-21 19:48:30 -0700
* Adapt to recent change in CAF CMake script. (Matthias Vallentin)

33
NEWS
View file

@ -26,11 +26,27 @@ New Functionality
- Bro now includes the NetControl framework. The framework allows for easy
interaction of Bro with hard- and software switches, firewalls, etc.
- There is a new file entropy analyzer for files.
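  For example, one might attach it to every file and record the result; a
  minimal sketch, assuming the analyzer tag is Files::ANALYZER_ENTROPY and
  that it raises a file_entropy event (neither name is spelled out in this
  excerpt)::

      event file_new(f: fa_file)
          {
          # Attach the (assumed) entropy analyzer to each file Bro sees.
          Files::add_analyzer(f, Files::ANALYZER_ENTROPY);
          }

      event file_entropy(f: fa_file, ent: entropy_test_result)
          {
          # Assumed result record; print the entropy estimate per file.
          print fmt("entropy of file %s: %f", f$id, ent$entropy);
          }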
- Bro now supports the remote framebuffer protocol (RFB) that is used by
VNC servers for remote graphical displays.
- Bro now supports the Radiotap header for 802.11 frames.
- Bro now has rudimentary IMAP and XMPP analyzers examining the initial
phases of the protocol. Right now these analyzers only identify
STARTTLS sessions, handing them over to TLS analysis. The analyzers
do not yet analyze any further IMAP/XMPP content.
- Bro now tracks VLAN IDs. To record them inside the connection log,
load protocols/conn/vlan-logging.bro.
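  For example, adding the following line to your site policy (e.g. local.bro)
  turns the extra conn.log columns on::

      @load protocols/conn/vlan-logging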
- The new misc/stats.bro records Bro execution statistics in a
standard Bro log file.
- A new dns_CAA_reply event gives access to DNS Certification Authority
Authorization replies.
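  A handler for it might look like the sketch below; the parameter list is an
  assumption modeled on the other DNS reply events, not confirmed by this
  excerpt::

      event dns_CAA_reply(c: connection, msg: dns_msg, ans: dns_answer,
                          flags: count, tag: string, value: string)
          {
          print fmt("CAA for %s: flags=%d %s \"%s\"", ans$query, flags, tag, value);
          }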
- A new per-packet event raw_packet() provides access to layer 2
information. Use with care; generating events per packet is
expensive.
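  A minimal sketch of a handler (assuming the event carries a raw_pkt_hdr
  record with optional per-layer fields)::

      event raw_packet(p: raw_pkt_hdr)
          {
          # Runs for every captured packet, so keep the body cheap.
          if ( p?$ip )
              print p$ip$src, p$ip$dst;
          }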
@ -40,6 +56,9 @@ New Functionality
argument that will be used for decoding errors into weird.log
(instead of reporter.log).
- A new get_current_packet_header bif returns the headers of the current
packet.
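  For instance, one could peek at the TCP header of the packet that triggered
  another event; the record layout used here is an assumption::

      event connection_established(c: connection)
          {
          local hdr = get_current_packet_header();

          if ( hdr?$tcp )
              print fmt("TCP flags on current packet: %d", hdr$tcp$flags);
          }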
- Two new built-in functions for handling set[subnet] and table[subnet]:
- check_subnet(subnet, table) checks if a specific subnet is a member
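  A usage sketch for the membership check (illustrative only; this excerpt
  shows just the first of the two functions)::

      event bro_init()
          {
          local nets = set(10.0.0.0/8, 192.168.0.0/16);

          # True only for an exact member of the set, not for contained subnets.
          if ( check_subnet(10.0.0.0/8, nets) )
              print "10.0.0.0/8 is in the set";
          }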
@ -67,6 +86,13 @@ New Functionality
- The IRC analyzer now recognizes StartTLS sessions and enables the SSL
analyzer for them.
- A set of new built-in functions for gathering execution statistics:
get_net_stats(), get_conn_stats(), get_proc_stats(),
get_event_stats(), get_reassembler_stats(), get_dns_stats(),
get_timer_stats(), get_file_analysis_stats(), get_thread_stats(),
get_gap_stats(), get_matcher_stats().
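  For example, a script could print packet-loss figures at shutdown; the
  field names of the returned record are assumptions here::

      event bro_done()
          {
          local ns = get_net_stats();
          print fmt("packets received=%d, dropped=%d", ns$pkts_recvd, ns$pkts_dropped);
          }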
- New Bro plugins in aux/plugins:
- af_packet: Native AF_PACKET support.
@ -79,9 +105,16 @@ New Functionality
Changed Functionality
---------------------
- The BrokerComm and BrokerStore namespaces were renamed to Broker.
The Broker "print" function was renamed to Broker::send_print, and
"event" to "Broker::send_event".
- ``SSH::skip_processing_after_detection`` was removed. The functionality was
replaced by ``SSH::disable_analyzer_after_detection``.
- ``net_stats()`` and ``resource_usage()`` have been superseded by the
new execution statistics functions (see above).
- Some script-level identifiers have changed their names:
snaplen -> Pcap::snaplen

View file

@ -1 +1 @@
2.4-422 2.4-544

@ -1 +1 @@
Subproject commit 424d40c1e8d5888311b50c0e5a9dfc9c5f818b66 Subproject commit 4179f9f00f4df21e4bcfece0323ec3468f688e8a

@ -1 +1 @@
Subproject commit 105dfe4ad6c4ae4563b21cb0466ee350f0af0d43 Subproject commit cb771a3cf592d46643eea35d206b9f3e1a0758f7

@ -1 +1 @@
Subproject commit 6ded82da498d805def6aa129cd7691d3b7287c37 Subproject commit b4d1686cdd3f5505e405667b1083e8335cae6928

@ -1 +1 @@
Subproject commit 583f3a3ff1847cf96a87f865d5cf0f36fae9dd67 Subproject commit 6f12b4da74e9e0885e1bd8cb67c2eda2b33c93a5

@ -1 +1 @@
Subproject commit 6684ab5109f526fb535013760f17a4c8dff093ae Subproject commit bb3f55f198f9cfd5e545345dd6425dd08ca1d45e

@ -1 +1 @@
Subproject commit ab61be0c4f128c976f72dfa5a09a87cd842f387a Subproject commit 6bd2ac48466b57cdda84a593faebc25a59d98a51

View file

@ -14,6 +14,9 @@
/* We are on a Linux system */
#cmakedefine HAVE_LINUX
/* We are on a Mac OS X (Darwin) system */
#cmakedefine HAVE_DARWIN
/* Define if you have the `mallinfo' function. */
#cmakedefine HAVE_MALLINFO

2
cmake

@ -1 +1 @@
Subproject commit 537e45afe1006a10f73847fab5f13d28ce43fc4d Subproject commit 0a2b36874ad5c1a22829135f8aeeac534469053f

View file

@ -96,13 +96,13 @@ logging is done remotely to the manager, and normally very little is written
to disk.
The rule of thumb we have followed recently is to allocate approximately 1
core for every 80Mbps of traffic that is being analyzed. However, this core for every 250Mbps of traffic that is being analyzed. However, this
estimate could be extremely traffic mix-specific. It has generally worked
for mixed traffic with many users and servers. For example, if your traffic
peaks around 2Gbps (combined) and you want to handle traffic at peak load,
you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps you may want to have 8 cores available (2048 / 250 == 8.2). If the 250Mbps
estimate works for your traffic, this could be handled by 3 physical hosts estimate works for your traffic, this could be handled by 2 physical hosts
dedicated to being workers with each one containing dual 6-core processors. dedicated to being workers with each one containing a quad-core processor.
Once a flow-based load balancer is put into place this model is extremely
easy to scale. It is recommended that you estimate the amount of

View file

@ -0,0 +1 @@
../../../../aux/plugins/kafka/README

View file

@ -17,20 +17,20 @@ Connecting to Peers
===================
Communication via Broker must first be turned on via
:bro:see:`BrokerComm::enable`. :bro:see:`Broker::enable`.
Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen` Bro can accept incoming connections by calling :bro:see:`Broker::listen`
and then monitor connection status updates via the
:bro:see:`BrokerComm::incoming_connection_established` and :bro:see:`Broker::incoming_connection_established` and
:bro:see:`BrokerComm::incoming_connection_broken` events. :bro:see:`Broker::incoming_connection_broken` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro
Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect` Bro can initiate outgoing connections by calling :bro:see:`Broker::connect`
and then monitor connection status updates via the
:bro:see:`BrokerComm::outgoing_connection_established`, :bro:see:`Broker::outgoing_connection_established`,
:bro:see:`BrokerComm::outgoing_connection_broken`, and :bro:see:`Broker::outgoing_connection_broken`, and
:bro:see:`BrokerComm::outgoing_connection_incompatible` events. :bro:see:`Broker::outgoing_connection_incompatible` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro
@ -38,14 +38,14 @@ Remote Printing
===============
To receive remote print messages, first use the
:bro:see:`BrokerComm::subscribe_to_prints` function to advertise to peers a :bro:see:`Broker::subscribe_to_prints` function to advertise to peers a
topic prefix of interest and then create an event handler for
:bro:see:`BrokerComm::print_handler` to handle any print messages that are :bro:see:`Broker::print_handler` to handle any print messages that are
received.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-listener.bro
To send remote print messages, just call :bro:see:`BrokerComm::print`. To send remote print messages, just call :bro:see:`Broker::send_print`.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-connector.bro
@ -69,14 +69,14 @@ Remote Events
=============
Receiving remote events is similar to remote prints. Just use the
:bro:see:`BrokerComm::subscribe_to_events` function and possibly define any :bro:see:`Broker::subscribe_to_events` function and possibly define any
new events along with handlers that peers may want to send.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro
There are two different ways to send events. The first is to call the
:bro:see:`BrokerComm::event` function directly. The second option is to call :bro:see:`Broker::send_event` function directly. The second option is to call
the :bro:see:`BrokerComm::auto_event` function where you specify a the :bro:see:`Broker::auto_event` function where you specify a
particular event that will be automatically sent to peers whenever the
event is called locally via the normal event invocation syntax.
@ -104,14 +104,14 @@ Remote Logging
.. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro
Use the :bro:see:`BrokerComm::subscribe_to_logs` function to advertise interest Use the :bro:see:`Broker::subscribe_to_logs` function to advertise interest
in logs written by peers. The topic names that Bro uses are implicitly of the
form "bro/log/<stream-name>".
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro
To send remote logs either redef :bro:see:`Log::enable_remote_logging` or
use the :bro:see:`BrokerComm::enable_remote_logs` function. The former use the :bro:see:`Broker::enable_remote_logs` function. The former
allows any log stream to be sent to peers while the latter enables remote
logging for particular streams.
@ -137,24 +137,24 @@ Tuning Access Control
By default, endpoints do not restrict the message topics that they send
to peers and do not restrict what message topics and data store
identifiers get advertised to peers. These are the default
:bro:see:`BrokerComm::EndpointFlags` supplied to :bro:see:`BrokerComm::enable`. :bro:see:`Broker::EndpointFlags` supplied to :bro:see:`Broker::enable`.
If not using the ``auto_publish`` flag, one can use the
:bro:see:`BrokerComm::publish_topic` and :bro:see:`BrokerComm::unpublish_topic` :bro:see:`Broker::publish_topic` and :bro:see:`Broker::unpublish_topic`
functions to manipulate the set of message topics (must match exactly)
that are allowed to be sent to peer endpoints. These settings take
precedence over the per-message ``peers`` flag supplied to functions
that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print`, that take a :bro:see:`Broker::SendFlags` such as :bro:see:`Broker::send_print`,
:bro:see:`BrokerComm::event`, :bro:see:`BrokerComm::auto_event` or :bro:see:`Broker::send_event`, :bro:see:`Broker::auto_event` or
:bro:see:`BrokerComm::enable_remote_logs`. :bro:see:`Broker::enable_remote_logs`.
If not using the ``auto_advertise`` flag, one can use the
:bro:see:`BrokerComm::advertise_topic` and :bro:see:`Broker::advertise_topic` and
:bro:see:`BrokerComm::unadvertise_topic` functions :bro:see:`Broker::unadvertise_topic` functions
to manipulate the set of topic prefixes that are allowed to be
advertised to peers. If an endpoint does not advertise a topic prefix, then
the only way peers can send messages to it is via the ``unsolicited``
flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching flag of :bro:see:`Broker::SendFlags` and choosing a topic with a matching
prefix (i.e. the full topic may be longer than the receiver's prefix, just the
prefix needs to match).
@ -192,8 +192,8 @@ last modification time.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/stores-connector.bro
In the above example, if a local copy of the store contents isn't
needed, just replace the :bro:see:`BrokerStore::create_clone` call with needed, just replace the :bro:see:`Broker::create_clone` call with
:bro:see:`BrokerStore::create_frontend`. Queries will then be made against :bro:see:`Broker::create_frontend`. Queries will then be made against
the remote master store instead of the local clone.
Note that all data store queries must be made within Bro's asynchronous

View file

@ -1,18 +1,18 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector"; redef Broker::endpoint_name = "connector";
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec); Broker::connect("127.0.0.1", broker_port, 1sec);
} }
event BrokerComm::outgoing_connection_established(peer_address: string, event Broker::outgoing_connection_established(peer_address: string,
peer_port: port, peer_port: port,
peer_name: string) peer_name: string)
{ {
print "BrokerComm::outgoing_connection_established", print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name; peer_address, peer_port, peer_name;
terminate(); terminate();
} }

View file

@ -1,20 +1,20 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener"; redef Broker::endpoint_name = "listener";
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::listen(broker_port, "127.0.0.1"); Broker::listen(broker_port, "127.0.0.1");
} }
event BrokerComm::incoming_connection_established(peer_name: string) event Broker::incoming_connection_established(peer_name: string)
{ {
print "BrokerComm::incoming_connection_established", peer_name; print "Broker::incoming_connection_established", peer_name;
} }
event BrokerComm::incoming_connection_broken(peer_name: string) event Broker::incoming_connection_broken(peer_name: string)
{ {
print "BrokerComm::incoming_connection_broken", peer_name; print "Broker::incoming_connection_broken", peer_name;
terminate(); terminate();
} }

View file

@ -1,30 +1,30 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector"; redef Broker::endpoint_name = "connector";
global my_event: event(msg: string, c: count); global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count); global my_auto_event: event(msg: string, c: count);
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec); Broker::connect("127.0.0.1", broker_port, 1sec);
BrokerComm::auto_event("bro/event/my_auto_event", my_auto_event); Broker::auto_event("bro/event/my_auto_event", my_auto_event);
} }
event BrokerComm::outgoing_connection_established(peer_address: string, event Broker::outgoing_connection_established(peer_address: string,
peer_port: port, peer_port: port,
peer_name: string) peer_name: string)
{ {
print "BrokerComm::outgoing_connection_established", print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name; peer_address, peer_port, peer_name;
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "hi", 0)); Broker::send_event("bro/event/my_event", Broker::event_args(my_event, "hi", 0));
event my_auto_event("stuff", 88); event my_auto_event("stuff", 88);
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "...", 1)); Broker::send_event("bro/event/my_event", Broker::event_args(my_event, "...", 1));
event my_auto_event("more stuff", 51); event my_auto_event("more stuff", 51);
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "bye", 2)); Broker::send_event("bro/event/my_event", Broker::event_args(my_event, "bye", 2));
} }
event BrokerComm::outgoing_connection_broken(peer_address: string, event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port) peer_port: port)
{ {
terminate(); terminate();

View file

@ -1,20 +1,20 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener"; redef Broker::endpoint_name = "listener";
global msg_count = 0; global msg_count = 0;
global my_event: event(msg: string, c: count); global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count); global my_auto_event: event(msg: string, c: count);
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::subscribe_to_events("bro/event/"); Broker::subscribe_to_events("bro/event/");
BrokerComm::listen(broker_port, "127.0.0.1"); Broker::listen(broker_port, "127.0.0.1");
} }
event BrokerComm::incoming_connection_established(peer_name: string) event Broker::incoming_connection_established(peer_name: string)
{ {
print "BrokerComm::incoming_connection_established", peer_name; print "Broker::incoming_connection_established", peer_name;
} }
event my_event(msg: string, c: count) event my_event(msg: string, c: count)

View file

@ -2,16 +2,16 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector"; redef Broker::endpoint_name = "connector";
redef Log::enable_local_logging = F; redef Log::enable_local_logging = F;
redef Log::enable_remote_logging = F; redef Log::enable_remote_logging = F;
global n = 0; global n = 0;
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::enable_remote_logs(Test::LOG); Broker::enable_remote_logs(Test::LOG);
BrokerComm::connect("127.0.0.1", broker_port, 1sec); Broker::connect("127.0.0.1", broker_port, 1sec);
} }
event do_write() event do_write()
@ -24,16 +24,16 @@ event do_write()
event do_write(); event do_write();
} }
event BrokerComm::outgoing_connection_established(peer_address: string, event Broker::outgoing_connection_established(peer_address: string,
peer_port: port, peer_port: port,
peer_name: string) peer_name: string)
{ {
print "BrokerComm::outgoing_connection_established", print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name; peer_address, peer_port, peer_name;
event do_write(); event do_write();
} }
event BrokerComm::outgoing_connection_broken(peer_address: string, event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port) peer_port: port)
{ {
terminate(); terminate();

View file

@ -2,18 +2,18 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener"; redef Broker::endpoint_name = "listener";
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::subscribe_to_logs("bro/log/Test::LOG"); Broker::subscribe_to_logs("bro/log/Test::LOG");
BrokerComm::listen(broker_port, "127.0.0.1"); Broker::listen(broker_port, "127.0.0.1");
} }
event BrokerComm::incoming_connection_established(peer_name: string) event Broker::incoming_connection_established(peer_name: string)
{ {
print "BrokerComm::incoming_connection_established", peer_name; print "Broker::incoming_connection_established", peer_name;
} }
event Test::log_test(rec: Test::Info) event Test::log_test(rec: Test::Info)

View file

@ -1,25 +1,25 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector"; redef Broker::endpoint_name = "connector";
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec); Broker::connect("127.0.0.1", broker_port, 1sec);
} }
event BrokerComm::outgoing_connection_established(peer_address: string, event Broker::outgoing_connection_established(peer_address: string,
peer_port: port, peer_port: port,
peer_name: string) peer_name: string)
{ {
print "BrokerComm::outgoing_connection_established", print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name; peer_address, peer_port, peer_name;
BrokerComm::print("bro/print/hi", "hello"); Broker::send_print("bro/print/hi", "hello");
BrokerComm::print("bro/print/stuff", "..."); Broker::send_print("bro/print/stuff", "...");
BrokerComm::print("bro/print/bye", "goodbye"); Broker::send_print("bro/print/bye", "goodbye");
} }
event BrokerComm::outgoing_connection_broken(peer_address: string, event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port) peer_port: port)
{ {
terminate(); terminate();

View file

@ -1,21 +1,21 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener"; redef Broker::endpoint_name = "listener";
global msg_count = 0; global msg_count = 0;
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::subscribe_to_prints("bro/print/"); Broker::subscribe_to_prints("bro/print/");
BrokerComm::listen(broker_port, "127.0.0.1"); Broker::listen(broker_port, "127.0.0.1");
} }
event BrokerComm::incoming_connection_established(peer_name: string) event Broker::incoming_connection_established(peer_name: string)
{ {
print "BrokerComm::incoming_connection_established", peer_name; print "Broker::incoming_connection_established", peer_name;
} }
event BrokerComm::print_handler(msg: string) event Broker::print_handler(msg: string)
{ {
++msg_count; ++msg_count;
print "got print message", msg; print "got print message", msg;

View file

@ -1,42 +1,42 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle; global h: opaque of Broker::Handle;
function dv(d: BrokerComm::Data): BrokerComm::DataVector function dv(d: Broker::Data): Broker::DataVector
{ {
local rval: BrokerComm::DataVector; local rval: Broker::DataVector;
rval[0] = d; rval[0] = d;
return rval; return rval;
} }
global ready: event(); global ready: event();
event BrokerComm::outgoing_connection_broken(peer_address: string, event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port) peer_port: port)
{ {
terminate(); terminate();
} }
event BrokerComm::outgoing_connection_established(peer_address: string, event Broker::outgoing_connection_established(peer_address: string,
peer_port: port, peer_port: port,
peer_name: string) peer_name: string)
{ {
local myset: set[string] = {"a", "b", "c"}; local myset: set[string] = {"a", "b", "c"};
local myvec: vector of string = {"alpha", "beta", "gamma"}; local myvec: vector of string = {"alpha", "beta", "gamma"};
h = BrokerStore::create_master("mystore"); h = Broker::create_master("mystore");
BrokerStore::insert(h, BrokerComm::data("one"), BrokerComm::data(110)); Broker::insert(h, Broker::data("one"), Broker::data(110));
BrokerStore::insert(h, BrokerComm::data("two"), BrokerComm::data(223)); Broker::insert(h, Broker::data("two"), Broker::data(223));
BrokerStore::insert(h, BrokerComm::data("myset"), BrokerComm::data(myset)); Broker::insert(h, Broker::data("myset"), Broker::data(myset));
BrokerStore::insert(h, BrokerComm::data("myvec"), BrokerComm::data(myvec)); Broker::insert(h, Broker::data("myvec"), Broker::data(myvec));
BrokerStore::increment(h, BrokerComm::data("one")); Broker::increment(h, Broker::data("one"));
BrokerStore::decrement(h, BrokerComm::data("two")); Broker::decrement(h, Broker::data("two"));
BrokerStore::add_to_set(h, BrokerComm::data("myset"), BrokerComm::data("d")); Broker::add_to_set(h, Broker::data("myset"), Broker::data("d"));
BrokerStore::remove_from_set(h, BrokerComm::data("myset"), BrokerComm::data("b")); Broker::remove_from_set(h, Broker::data("myset"), Broker::data("b"));
BrokerStore::push_left(h, BrokerComm::data("myvec"), dv(BrokerComm::data("delta"))); Broker::push_left(h, Broker::data("myvec"), dv(Broker::data("delta")));
BrokerStore::push_right(h, BrokerComm::data("myvec"), dv(BrokerComm::data("omega"))); Broker::push_right(h, Broker::data("myvec"), dv(Broker::data("omega")));
when ( local res = BrokerStore::size(h) ) when ( local res = Broker::size(h) )
{ {
print "master size", res; print "master size", res;
event ready(); event ready();
@ -47,7 +47,7 @@ event BrokerComm::outgoing_connection_established(peer_address: string,
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1secs); Broker::connect("127.0.0.1", broker_port, 1secs);
BrokerComm::auto_event("bro/event/ready", ready); Broker::auto_event("bro/event/ready", ready);
} }

View file

@ -1,13 +1,13 @@
const broker_port: port = 9999/tcp &redef; const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T; redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle; global h: opaque of Broker::Handle;
global expected_key_count = 4; global expected_key_count = 4;
global key_count = 0; global key_count = 0;
function do_lookup(key: string) function do_lookup(key: string)
{ {
when ( local res = BrokerStore::lookup(h, BrokerComm::data(key)) ) when ( local res = Broker::lookup(h, Broker::data(key)) )
{ {
++key_count; ++key_count;
print "lookup", key, res; print "lookup", key, res;
@ -21,15 +21,15 @@ function do_lookup(key: string)
event ready() event ready()
{ {
h = BrokerStore::create_clone("mystore"); h = Broker::create_clone("mystore");
when ( local res = BrokerStore::keys(h) ) when ( local res = Broker::keys(h) )
{ {
print "clone keys", res; print "clone keys", res;
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 0))); do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 0)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 1))); do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 1)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 2))); do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 2)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 3))); do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 3)));
} }
timeout 10sec timeout 10sec
{ print "timeout"; } { print "timeout"; }
@ -37,7 +37,7 @@ event ready()
event bro_init() event bro_init()
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::subscribe_to_events("bro/event/ready"); Broker::subscribe_to_events("bro/event/ready");
BrokerComm::listen(broker_port, "127.0.0.1"); Broker::listen(broker_port, "127.0.0.1");
} }

View file

@ -13,6 +13,6 @@ export {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
BrokerComm::enable(); Broker::enable();
Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test, $path="test"]); Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test, $path="test"]);
} }

View file

@ -39,6 +39,8 @@ Network Protocols
+----------------------------+---------------------------------------+---------------------------------+
| rdp.log | RDP | :bro:type:`RDP::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| rfb.log | Remote Framebuffer (RFB) | :bro:type:`RFB::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| sip.log | SIP | :bro:type:`SIP::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| smtp.log | SMTP transactions | :bro:type:`SMTP::Info` |

View file

@ -277,16 +277,25 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: delete
The "delete" statement is used to remove an element from a
:bro:type:`set` or :bro:type:`table`. Nothing happens if the :bro:type:`set` or :bro:type:`table`, or to remove a value from
specified element does not exist in the set or table. a :bro:type:`record` field that has the :bro:attr:`&optional` attribute.
When attempting to remove an element from a set or table,
nothing happens if the specified index does not exist.
When attempting to remove a value from an "&optional" record field,
nothing happens if that field doesn't have a value.
Example::
local myset = set("this", "test");
local mytable = table(["key1"] = 80/tcp, ["key2"] = 53/udp);
local myrec = MyRecordType($a = 1, $b = 2);
delete myset["test"];
delete mytable["key1"];
# In this example, "b" must have the "&optional" attribute
delete myrec$b;
.. bro:keyword:: event
The "event" statement immediately queues invocation of an event handler.
@ -306,30 +315,33 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: for
A "for" loop iterates over each element in a string, set, vector, or
table and executes a statement for each iteration. Currently, table and executes a statement for each iteration (note that the order
modifying a container's membership while iterating over it may in which the loop iterates over the elements in a set or a table is
result in undefined behavior, so avoid adding or removing elements nondeterministic). However, no loop iterations occur if the string,
inside the loop. set, vector, or table is empty.
For each iteration of the loop, a loop variable will be assigned to an
element if the expression evaluates to a string or set, or an index if
the expression evaluates to a vector or table. Then the statement
is executed. However, the statement will not be executed if the expression is executed.
evaluates to an object with no elements.
If the expression is a table or a set with more than one index, then the
loop variable must be specified as a comma-separated list of different
loop variables (one for each index), enclosed in brackets.
A :bro:keyword:`break` statement can be used at any time to immediately
terminate the "for" loop, and a :bro:keyword:`next` statement can be
used to skip to the next loop iteration.
Note that the loop variable in a "for" statement is not allowed to be
a global variable, and it does not need to be declared prior to the "for"
statement. The type will be inferred from the elements of the
expression.
Currently, modifying a container's membership while iterating over it may
result in undefined behavior, so do not add or remove elements
inside the loop.
A :bro:keyword:`break` statement will immediately terminate the "for"
loop, and a :bro:keyword:`next` statement will skip to the next loop
iteration.
Example::
local myset = set(80/tcp, 81/tcp);
@ -532,8 +544,6 @@ Here are the statements that the Bro scripting language supports.
end with either a :bro:keyword:`break`, :bro:keyword:`fallthrough`, or
:bro:keyword:`return` statement (although "return" is allowed only
if the "switch" statement is inside a function, hook, or event handler).
If a "case" (or "default") block contain more than one statement, then
there is no need to wrap them in braces.
Note that the braces in a "switch" statement are always required (these
do not indicate the presence of a `compound statement`_), and that no
@ -604,12 +614,9 @@ Here are the statements that the Bro scripting language supports.
if ( skip_ahead() )
next;
[...]
if ( finish_up )
break;
[...]
}
.. _compound statement: .. _compound statement:

View file

@ -0,0 +1,25 @@
module Conn;
export {
## The record type which contains column fields of the connection log.
type Info: record {
ts: time &log;
uid: string &log;
id: conn_id &log;
proto: transport_proto &log;
service: string &log &optional;
duration: interval &log &optional;
orig_bytes: count &log &optional;
resp_bytes: count &log &optional;
conn_state: string &log &optional;
local_orig: bool &log &optional;
local_resp: bool &log &optional;
missed_bytes: count &log &default=0;
history: string &log &optional;
orig_pkts: count &log &optional;
orig_ip_bytes: count &log &optional;
resp_pkts: count &log &optional;
resp_ip_bytes: count &log &optional;
tunnel_parents: set[string] &log;
};
}

View file

@ -0,0 +1,7 @@
module HTTP;
export {
## This setting changes if passwords used in Basic-Auth are captured or
## not.
const default_capture_password = F &redef;
}

View file

@ -362,8 +362,7 @@ decrypted from HTTP streams is stored in
:bro:see:`HTTP::default_capture_password` as shown in the stripped down
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro .. btest-include:: ${DOC_ROOT}/scripting/http_main.bro
:lines: 9-11,20-22,125
Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the
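following redef to a site policy script; a sketch::

    redef HTTP::default_capture_password = T;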
@ -776,7 +775,7 @@ string against which it will be tested to be on the right.
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
``quick`` or the word ``fox``. The ``if`` statement in the script uses ``quick`` or the word ``lazy``. The ``if`` statement in the script uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
@ -825,8 +824,7 @@ example of the ``record`` data type in the earlier sections, the
:bro:type:`Conn::Info`, which corresponds to the fields logged into
``conn.log``, is shown by the excerpt below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro .. btest-include:: ${DOC_ROOT}/scripting/data_type_record.bro
:lines: 10-12,16-17,19,21,23,25,28,31,35,38,57,63,69,75,98,101,105,108,112,116-117,122
Looking at the structure of the definition, a new collection of data
types is being defined as a type called ``Info``. Since this type

View file

@ -6,6 +6,7 @@ module X509;
export {
redef enum Log::ID += { LOG };
## The record type which contains the fields of the X.509 log.
type Info: record {
## Current timestamp.
ts: time &log;

View file

@ -1 +1,2 @@
@load ./main
@load ./store

View file

@ -1,11 +1,20 @@
##! Various data structure definitions for use with Bro's communication system.
module BrokerComm; module Log;
export {
type Log::ID: enum {
## Dummy place-holder.
UNKNOWN
};
}
module Broker;
export {
## A name used to identify this endpoint to peers.
## .. bro:see:: BrokerComm::connect BrokerComm::listen ## .. bro:see:: Broker::connect Broker::listen
const endpoint_name = "" &redef;
## Change communication behavior.
@ -32,11 +41,11 @@ export {
## Opaque communication data.
type Data: record {
d: opaque of BrokerComm::Data &optional; d: opaque of Broker::Data &optional;
};
## Opaque communication data.
type DataVector: vector of BrokerComm::Data; type DataVector: vector of Broker::Data;
## Opaque event communication data.
type EventArgs: record {
@ -49,55 +58,315 @@ export {
## Opaque communication data used as a convenient way to wrap key-value
## pairs that comprise table entries.
type TableItem : record {
key: BrokerComm::Data; key: Broker::Data;
val: BrokerComm::Data; val: Broker::Data;
};
## Enable use of communication.
##
## flags: used to tune the local Broker endpoint behavior.
##
## Returns: true if communication is successfully initialized.
global enable: function(flags: EndpointFlags &default = EndpointFlags()): bool;
## Changes endpoint flags originally supplied to :bro:see:`Broker::enable`.
##
## flags: the new endpoint behavior flags to use.
##
## Returns: true if flags were changed.
global set_endpoint_flags: function(flags: EndpointFlags &default = EndpointFlags()): bool;
## Allow sending messages to peers if associated with the given topic.
## This has no effect if auto publication behavior is enabled via the flags
## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to allow messages to be published under.
##
## Returns: true if successful.
global publish_topic: function(topic: string): bool;
## Disallow sending messages to peers if associated with the given topic.
## This has no effect if auto publication behavior is enabled via the flags
## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to disallow messages to be published under.
##
## Returns: true if successful.
global unpublish_topic: function(topic: string): bool;
## Listen for remote connections.
##
## p: the TCP port to listen on.
##
## a: an address string on which to accept connections, e.g.
## "127.0.0.1". An empty string refers to @p INADDR_ANY.
##
## reuse: equivalent to behavior of SO_REUSEADDR.
##
## Returns: true if the local endpoint is now listening for connections.
##
## .. bro:see:: Broker::incoming_connection_established
global listen: function(p: port, a: string &default = "", reuse: bool &default = T): bool;
## Initiate a remote connection.
##
## a: an address to connect to, e.g. "localhost" or "127.0.0.1".
##
## p: the TCP port on which the remote side is listening.
##
## retry: an interval at which to retry establishing the
## connection with the remote peer if it cannot be made initially, or
## if it ever becomes disconnected.
##
## Returns: true if it's possible to try connecting with the peer and
## it's a new peer. The actual connection may not be established
## until a later point in time.
##
## .. bro:see:: Broker::outgoing_connection_established
global connect: function(a: string, p: port, retry: interval): bool;
## Remove a remote connection.
##
## a: the address used in previous successful call to :bro:see:`Broker::connect`.
##
## p: the port used in previous successful call to :bro:see:`Broker::connect`.
##
## Returns: true if the arguments match a previously successful call to
## :bro:see:`Broker::connect`.
global disconnect: function(a: string, p: port): bool;
## Print a simple message to any interested peers. The receiver can use
## :bro:see:`Broker::print_handler` to handle messages.
##
## topic: a topic associated with the printed message.
##
## msg: the print message to send to peers.
##
## flags: tune the behavior of how the message is sent.
##
## Returns: true if the message is sent.
global send_print: function(topic: string, msg: string, flags: SendFlags &default = SendFlags()): bool;
## Register interest in all peer print messages that use a certain topic
## prefix. Use :bro:see:`Broker::print_handler` to handle received
## messages.
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new print subscription and it is now registered.
global subscribe_to_prints: function(topic_prefix: string): bool;
## Unregister interest in all peer print messages that use a topic prefix.
##
## topic_prefix: a prefix previously supplied to a successful call to
## :bro:see:`Broker::subscribe_to_prints`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
global unsubscribe_to_prints: function(topic_prefix: string): bool;
## Send an event to any interested peers.
##
## topic: a topic associated with the event message.
##
## args: event arguments as made by :bro:see:`Broker::event_args`.
##
## flags: tune the behavior of how the message is sent.
##
## Returns: true if the message is sent.
global send_event: function(topic: string, args: EventArgs, flags: SendFlags &default = SendFlags()): bool;
## Automatically send an event to any interested peers whenever it is
## locally dispatched (e.g. using "event my_event(...);" in a script).
##
## topic: a topic string associated with the event message.
## Peers advertise interest by registering a subscription to some
## prefix of this topic name.
##
## ev: a Bro event value.
##
## flags: tune the behavior of how the message is sent.
##
## Returns: true if automatic event sending is now enabled.
global auto_event: function(topic: string, ev: any, flags: SendFlags &default = SendFlags()): bool;
## Stop automatically sending an event to peers upon local dispatch.
##
## topic: a topic originally given to :bro:see:`Broker::auto_event`.
##
## ev: an event originally given to :bro:see:`Broker::auto_event`.
##
## Returns: true if automatic events will not occur for the topic/event
## pair.
global auto_event_stop: function(topic: string, ev: any): bool;
## Register interest in all peer event messages that use a certain topic
## prefix.
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new event subscription and it is now registered.
global subscribe_to_events: function(topic_prefix: string): bool;
## Unregister interest in all peer event messages that use a topic prefix.
##
## topic_prefix: a prefix previously supplied to a successful call to
## :bro:see:`Broker::subscribe_to_events`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
global unsubscribe_to_events: function(topic_prefix: string): bool;
## Enable remote logs for a given log stream.
##
## id: the log stream to enable remote logs for.
##
## flags: tune the behavior of how log entry messages are sent.
##
## Returns: true if remote logs are enabled for the stream.
global enable_remote_logs: function(id: Log::ID, flags: SendFlags &default = SendFlags()): bool;
## Disable remote logs for a given log stream.
##
## id: the log stream to disable remote logs for.
##
## Returns: true if remote logs are disabled for the stream.
global disable_remote_logs: function(id: Log::ID): bool;
## Check if remote logs are enabled for a given log stream.
##
## id: the log stream to check.
##
## Returns: true if remote logs are enabled for the given stream.
global remote_logs_enabled: function(id: Log::ID): bool;
## Register interest in all peer log messages that use a certain topic
## prefix. Logs are implicitly sent with topic "bro/log/<stream-name>" and
## the receiving side processes them through the logging framework as usual.
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new log subscription and it is now registered.
global subscribe_to_logs: function(topic_prefix: string): bool;
## Unregister interest in all peer log messages that use a topic prefix.
## Logs are implicitly sent with topic "bro/log/<stream-name>" and the
## receiving side processes them through the logging framework as usual.
##
## topic_prefix: a prefix previously supplied to a successful call to
## :bro:see:`Broker::subscribe_to_logs`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
global unsubscribe_to_logs: function(topic_prefix: string): bool;
}
module BrokerStore;

export {
    ## Whether a data store query could be completed or not.
    type QueryStatus: enum {
        SUCCESS,
        FAILURE,
    };

    ## An expiry time for a key-value pair inserted in to a data store.
    type ExpiryTime: record {
        ## Absolute point in time at which to expire the entry.
        absolute: time &optional;
        ## A point in time relative to the last modification time at which
        ## to expire the entry. New modifications will delay the expiration.
        since_last_modification: interval &optional;
    };

    ## The result of a data store query.
    type QueryResult: record {
        ## Whether the query completed or not.
        status: BrokerStore::QueryStatus;
        ## The result of the query. Certain queries may use a particular
        ## data type (e.g. querying store size always returns a count, but
        ## a lookup may return various data types).
        result: BrokerComm::Data;
    };

    ## Options to tune the SQLite storage backend.
    type SQLiteOptions: record {
        ## File system path of the database.
        path: string &default = "store.sqlite";
    };

    ## Options to tune the RocksDB storage backend.
    type RocksDBOptions: record {
        ## File system path of the database.
        path: string &default = "store.rocksdb";
    };

    ## Options to tune the particular storage backends.
    type BackendOptions: record {
        sqlite: SQLiteOptions &default = SQLiteOptions();
        rocksdb: RocksDBOptions &default = RocksDBOptions();
    };
}

@load base/bif/comm.bif
@load base/bif/messaging.bif

module Broker;

function enable(flags: EndpointFlags &default = EndpointFlags()) : bool
    {
    return __enable(flags);
    }

function set_endpoint_flags(flags: EndpointFlags &default = EndpointFlags()): bool
    {
    return __set_endpoint_flags(flags);
    }

function publish_topic(topic: string): bool
    {
    return __publish_topic(topic);
    }

function unpublish_topic(topic: string): bool
    {
    return __unpublish_topic(topic);
    }

function listen(p: port, a: string &default = "", reuse: bool &default = T): bool
    {
    return __listen(p, a, reuse);
    }

function connect(a: string, p: port, retry: interval): bool
    {
    return __connect(a, p, retry);
    }

function disconnect(a: string, p: port): bool
    {
    return __disconnect(a, p);
    }

function send_print(topic: string, msg: string, flags: SendFlags &default = SendFlags()): bool
    {
    return __send_print(topic, msg, flags);
    }

function subscribe_to_prints(topic_prefix: string): bool
    {
    return __subscribe_to_prints(topic_prefix);
    }

function unsubscribe_to_prints(topic_prefix: string): bool
    {
    return __unsubscribe_to_prints(topic_prefix);
    }

function send_event(topic: string, args: EventArgs, flags: SendFlags &default = SendFlags()): bool
    {
    return __event(topic, args, flags);
    }

function auto_event(topic: string, ev: any, flags: SendFlags &default = SendFlags()): bool
    {
    return __auto_event(topic, ev, flags);
    }

function auto_event_stop(topic: string, ev: any): bool
    {
    return __auto_event_stop(topic, ev);
    }

function subscribe_to_events(topic_prefix: string): bool
    {
    return __subscribe_to_events(topic_prefix);
    }

function unsubscribe_to_events(topic_prefix: string): bool
    {
    return __unsubscribe_to_events(topic_prefix);
    }

function enable_remote_logs(id: Log::ID, flags: SendFlags &default = SendFlags()): bool
    {
    return __enable_remote_logs(id, flags);
    }

function disable_remote_logs(id: Log::ID): bool
    {
    return __disable_remote_logs(id);
    }

function remote_logs_enabled(id: Log::ID): bool
    {
    return __remote_logs_enabled(id);
    }

function subscribe_to_logs(topic_prefix: string): bool
    {
    return __subscribe_to_logs(topic_prefix);
    }

function unsubscribe_to_logs(topic_prefix: string): bool
    {
    return __unsubscribe_to_logs(topic_prefix);
    }
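A complementary sketch of the sending side, using the wrappers above (endpoint, topic, and the demo event are placeholders, not part of the framework):

global demo_message: event(msg: string, n: count);

event bro_init()
    {
    Broker::enable();
    Broker::connect("127.0.0.1", 9999/tcp, 1sec);
    # Forward the local connection log to any subscribed peer.
    Broker::enable_remote_logs(Conn::LOG);
    }

event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
    {
    # Publish an explicit event once the peering is up.
    Broker::send_event("bro/events/demo", Broker::event_args(demo_message, "hello", 42));
    }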

@ -68,7 +68,7 @@ export {
## Events raised by TimeMachine instances and handled by workers. ## Events raised by TimeMachine instances and handled by workers.
const tm2worker_events = /EMPTY/ &redef; const tm2worker_events = /EMPTY/ &redef;
## Events sent by the control host (i.e. BroControl) when dynamically ## Events sent by the control host (i.e., BroControl) when dynamically
## connecting to a running instance to update settings or request data. ## connecting to a running instance to update settings or request data.
const control_events = Control::controller_events &redef; const control_events = Control::controller_events &redef;

@ -2,7 +2,7 @@
# MPEG v3 audio # MPEG v3 audio
signature file-mpeg-audio { signature file-mpeg-audio {
file-mime "audio/mpeg", 20 file-mime "audio/mpeg", 20
file-magic /^\xff[\xe2\xe3\xf2\xf3\xf6\xf7\xfa\xfb\xfc\xfd]/ file-magic /^(ID3|\xff[\xe2\xe3\xf2\xf3\xf6\xf7\xfa\xfb\xfc\xfd])/
} }
# MPEG v4 audio # MPEG v4 audio

@ -9,53 +9,53 @@ signature file-plaintext {
signature file-json { signature file-json {
file-mime "text/json", 1 file-mime "text/json", 1
file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\{[\x0d\x0a[:blank:]]*(["][^"]{1,}["]|[a-zA-Z][a-zA-Z0-9\\_]*)[\x0d\x0a[:blank:]]*:[\x0d\x0a[:blank:]]*(["]|\[|\{|[0-9]|true|false)/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*\{[\x0d\x0a[:blank:]]*(["][^"]{1,}["]|[a-zA-Z][a-zA-Z0-9\\_]*)[\x0d\x0a[:blank:]]*:[\x0d\x0a[:blank:]]*(["]|\[|\{|[0-9]|true|false)/
} }
signature file-json2 { signature file-json2 {
file-mime "text/json", 1 file-mime "text/json", 1
file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\[[\x0d\x0a[:blank:]]*(((["][^"]{1,}["]|[0-9]{1,}(\.[0-9]{1,})?|true|false)[\x0d\x0a[:blank:]]*,)|\{|\[)[\x0d\x0a[:blank:]]*/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*\[[\x0d\x0a[:blank:]]*(((["][^"]{1,}["]|[0-9]{1,}(\.[0-9]{1,})?|true|false)[\x0d\x0a[:blank:]]*,)|\{|\[)[\x0d\x0a[:blank:]]*/
} }
# Match empty JSON documents. # Match empty JSON documents.
signature file-json3 { signature file-json3 {
file-mime "text/json", 0 file-mime "text/json", 0
file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*(\[\]|\{\})[\x0d\x0a[:blank:]]*$/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*(\[\]|\{\})[\x0d\x0a[:blank:]]*$/
} }
signature file-xml { signature file-xml {
file-mime "application/xml", 10 file-mime "application/xml", 10
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<\?xml / file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*\x00?<\x00?\?\x00?x\x00?m\x00?l\x00? \x00?/
} }
signature file-xhtml { signature file-xhtml {
file-mime "text/html", 100 file-mime "text/html", 100
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<(![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]|[hH][tT][mM][lL]|[mM][eE][tT][aA] {1,}[hH][tT][tT][pP]-[eE][qQ][uU][iI][vV])/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<(![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]|[hH][tT][mM][lL]|[mM][eE][tT][aA] {1,}[hH][tT][tT][pP]-[eE][qQ][uU][iI][vV])/
} }
signature file-html { signature file-html {
file-mime "text/html", 49 file-mime "text/html", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]/
} }
signature file-html2 { signature file-html2 {
file-mime "text/html", 20 file-mime "text/html", 20
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([hH][eE][aA][dD]|[hH][tT][mM][lL]|[tT][iI][tT][lL][eE]|[bB][oO][dD][yY])/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([hH][eE][aA][dD]|[hH][tT][mM][lL]|[tT][iI][tT][lL][eE]|[bB][oO][dD][yY])/
} }
signature file-rss { signature file-rss {
file-mime "text/rss", 90 file-mime "text/rss", 90
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[rR][sS][sS]/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[rR][sS][sS]/
} }
signature file-atom { signature file-atom {
file-mime "text/atom", 100 file-mime "text/atom", 100
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([rR][sS][sS][^>]*xmlns:atom|[fF][eE][eE][dD][^>]*xmlns=["']?http:\/\/www.w3.org\/2005\/Atom["']?)/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([rR][sS][sS][^>]*xmlns:atom|[fF][eE][eE][dD][^>]*xmlns=["']?http:\/\/www.w3.org\/2005\/Atom["']?)/
} }
signature file-soap { signature file-soap {
file-mime "application/soap+xml", 49 file-mime "application/soap+xml", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[sS][oO][aA][pP](-[eE][nN][vV])?:[eE][nN][vV][eE][lL][oO][pP][eE]/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[sS][oO][aA][pP](-[eE][nN][vV])?:[eE][nN][vV][eE][lL][oO][pP][eE]/
} }
signature file-cross-domain-policy { signature file-cross-domain-policy {
@ -70,7 +70,7 @@ signature file-cross-domain-policy2 {
signature file-xmlrpc { signature file-xmlrpc {
file-mime "application/xml-rpc", 49 file-mime "application/xml-rpc", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][eE][tT][hH][oO][dD][rR][eE][sS][pP][oO][nN][sS][eE]>/ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][eE][tT][hH][oO][dD][rR][eE][sS][pP][oO][nN][sS][eE]>/
} }
signature file-coldfusion { signature file-coldfusion {
@ -81,7 +81,13 @@ signature file-coldfusion {
# Adobe Flash Media Manifest # Adobe Flash Media Manifest
signature file-f4m { signature file-f4m {
file-mime "application/f4m", 49 file-mime "application/f4m", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\// file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\//
}
# .ini style files
signature file-ini {
file-mime "text/ini", 20
file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x00\x0d\x0a[:blank:]]*\[[^\x0d\x0a]+\][[:blank:]\x00]*[\x0d\x0a]/
} }
# Microsoft LNK files # Microsoft LNK files
@ -90,6 +96,41 @@ signature file-lnk {
file-magic /^\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x10\x00\x00\x00\x46/ file-magic /^\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x10\x00\x00\x00\x46/
} }
# Microsoft Registry policies
signature file-pol {
file-mime "application/vnd.ms-pol", 49
file-magic /^PReg/
}
# Old style Windows registry file
signature file-reg {
file-mime "application/vnd.ms-reg", 49
file-magic /^REGEDIT4/
}
# Newer Windows registry file
signature file-reg-utf16 {
file-mime "application/vnd.ms-reg", 49
file-magic /^\xFF\xFEW\x00i\x00n\x00d\x00o\x00w\x00s\x00 \x00R\x00e\x00g\x00i\x00s\x00t\x00r\x00y\x00 \x00E\x00d\x00i\x00t\x00o\x00r\x00 \x00V\x00e\x00r\x00s\x00i\x00o\x00n\x00 \x005\x00\.\x000\x000/
}
# Microsoft Registry format (typically DESKTOP.DAT)
signature file-regf {
file-mime "application vnd.ms-regf", 49
file-magic /^\x72\x65\x67\x66/
}
# Microsoft Outlook PST files
signature file-pst {
file-mime "application/vnd.ms-outlook", 49
file-magic /!BDN......[\x0e\x0f\x15\x17][\x00-\x02]/
}
signature file-afpinfo {
file-mime "application/vnd.apple-afpinfo"
file-magic /^AFP/
}
signature file-jar { signature file-jar {
file-mime "application/java-archive", 100 file-mime "application/java-archive", 100
file-magic /^PK\x03\x04.{1,200}\x14\x00..META-INF\/MANIFEST\.MF/ file-magic /^PK\x03\x04.{1,200}\x14\x00..META-INF\/MANIFEST\.MF/

@ -95,9 +95,20 @@ export {
## connection record should go here to give context to the data. ## connection record should go here to give context to the data.
conn: connection &optional; conn: connection &optional;
## If the data was discovered within a connection, the
## connection uid should go here to give context to the data.
## If the *conn* field is provided, this will be automatically
## filled out.
uid: string &optional;
## If the data was discovered within a file, the file record ## If the data was discovered within a file, the file record
## should go here to provide context to the data. ## should go here to provide context to the data.
f: fa_file &optional; f: fa_file &optional;
## If the data was discovered within a file, the file uid should
## go here to provide context to the data. If the *f* field is
## provided, this will be automatically filled out.
fuid: string &optional;
}; };
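With the new *uid*/*fuid* fields, a script can report a sighting when only the connection uid is still at hand. A minimal sketch (Conn::IN_RESP is assumed to be available from the usual where-locations policy script):

event connection_state_remove(c: connection)
    {
    Intel::seen([$host=c$id$resp_h,
                 $indicator_type=Intel::ADDR,
                 $uid=c$uid,
                 $where=Conn::IN_RESP]);
    }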
## Record used for the logging framework representing a positive ## Record used for the logging framework representing a positive
@ -116,6 +127,7 @@ export {
## If a file was associated with this intelligence hit, ## If a file was associated with this intelligence hit,
## this is the uid for the file. ## this is the uid for the file.
fuid: string &log &optional; fuid: string &log &optional;
## A mime type if the intelligence hit is related to a file. ## A mime type if the intelligence hit is related to a file.
## If the $f field is provided this will be automatically filled ## If the $f field is provided this will be automatically filled
## out. ## out.
@ -296,15 +308,14 @@ event Intel::match(s: Seen, items: set[Item]) &priority=5
if ( s?$f ) if ( s?$f )
{ {
s$fuid = s$f$id;
if ( s$f?$conns && |s$f$conns| == 1 ) if ( s$f?$conns && |s$f$conns| == 1 )
{ {
for ( cid in s$f$conns ) for ( cid in s$f$conns )
s$conn = s$f$conns[cid]; s$conn = s$f$conns[cid];
} }
if ( ! info?$fuid )
info$fuid = s$f$id;
if ( ! info?$file_mime_type && s$f?$info && s$f$info?$mime_type ) if ( ! info?$file_mime_type && s$f?$info && s$f$info?$mime_type )
info$file_mime_type = s$f$info$mime_type; info$file_mime_type = s$f$info$mime_type;
@ -312,12 +323,18 @@ event Intel::match(s: Seen, items: set[Item]) &priority=5
info$file_desc = Files::describe(s$f); info$file_desc = Files::describe(s$f);
} }
if ( s?$fuid )
info$fuid = s$fuid;
if ( s?$conn ) if ( s?$conn )
{ {
info$uid = s$conn$uid; s$uid = s$conn$uid;
info$id = s$conn$id; info$id = s$conn$id;
} }
if ( s?$uid )
info$uid = s$uid;
for ( item in items ) for ( item in items )
{ {
add info$sources[item$meta$source]; add info$sources[item$meta$source];

@ -23,20 +23,20 @@ export {
# ### Generic functions and events. # ### Generic functions and events.
# ### # ###
# Activates a plugin. ## Activates a plugin.
# ##
# p: The plugin to activate. ## p: The plugin to activate.
# ##
# priority: The higher the priority, the earlier this plugin will be checked ## priority: The higher the priority, the earlier this plugin will be checked
# whether it supports an operation, relative to other plugins. ## whether it supports an operation, relative to other plugins.
global activate: function(p: PluginState, priority: int); global activate: function(p: PluginState, priority: int);
# Event that is used to initialize plugins. Place all plugin initialization ## Event that is used to initialize plugins. Place all plugin initialization
# related functionality in this event. ## related functionality in this event.
global NetControl::init: event(); global NetControl::init: event();
# Event that is raised once all plugins activated in ``NetControl::init`` have finished ## Event that is raised once all plugins activated in ``NetControl::init``
# their initialization. ## have finished their initialization.
global NetControl::init_done: event(); global NetControl::init_done: event();
# ### # ###
@ -109,21 +109,24 @@ export {
## ##
## r: The rule to install. ## r: The rule to install.
## ##
## Returns: If successful, returns an ID string unique to the rule that can later ## Returns: If successful, returns an ID string unique to the rule that can
## be used to refer to it. If unsuccessful, returns an empty string. The ID is also ## later be used to refer to it. If unsuccessful, returns an empty
## assigned to ``r$id``. Note that "successful" means "a plugin knew how to handle ## string. The ID is also assigned to ``r$id``. Note that
## the rule", it doesn't necessarily mean that it was indeed successfully put in ## "successful" means "a plugin knew how to handle the rule", it
## place, because that might happen asynchronously and thus fail only later. ## doesn't necessarily mean that it was indeed successfully put in
## place, because that might happen asynchronously and thus fail
## only later.
global add_rule: function(r: Rule) : string; global add_rule: function(r: Rule) : string;
## Removes a rule. ## Removes a rule.
## ##
## id: The rule to remove, specified as the ID returned by :bro:id:`add_rule` . ## id: The rule to remove, specified as the ID returned by :bro:id:`NetControl::add_rule`.
## ##
## Returns: True if successful, the relevant plugin indicated that it knew how ## Returns: True if successful, the relevant plugin indicated that it knew
## to handle the removal. Note that again "success" means the plugin accepted the ## how to handle the removal. Note that again "success" means the
## removal. They might still fail to put it into effect, as that might happen ## plugin accepted the removal. They might still fail to put it
## asynchronously and thus go wrong at that point. ## into effect, as that might happen asynchronously and thus go
## wrong at that point.
global remove_rule: function(id: string) : bool; global remove_rule: function(id: string) : bool;
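As a hedged sketch of how these two functions are typically used together (the helper name is ours; the Rule and Entity field names, including *expire*, are assumptions based on this framework's records):

function block_address(a: addr, t: interval): string
    {
    local e = NetControl::Entity($ty=NetControl::ADDRESS, $ip=addr_to_subnet(a));
    local r = NetControl::Rule($ty=NetControl::DROP, $target=NetControl::FORWARD,
                               $entity=e, $expire=t);
    return NetControl::add_rule(r);
    }

# Later, once the block is no longer needed:
# NetControl::remove_rule(rule_id);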
## Searches all rules affecting a certain IP address. ## Searches all rules affecting a certain IP address.

@ -227,7 +227,7 @@ function acld_add_rule_fun(p: PluginState, r: Rule) : bool
if ( ar$command == "" ) if ( ar$command == "" )
return F; return F;
BrokerComm::event(p$acld_config$acld_topic, BrokerComm::event_args(acld_add_rule, p$acld_id, r, ar)); Broker::send_event(p$acld_config$acld_topic, Broker::event_args(acld_add_rule, p$acld_id, r, ar));
return T; return T;
} }
@ -242,18 +242,18 @@ function acld_remove_rule_fun(p: PluginState, r: Rule) : bool
else else
return F; return F;
BrokerComm::event(p$acld_config$acld_topic, BrokerComm::event_args(acld_remove_rule, p$acld_id, r, ar)); Broker::send_event(p$acld_config$acld_topic, Broker::event_args(acld_remove_rule, p$acld_id, r, ar));
return T; return T;
} }
function acld_init(p: PluginState) function acld_init(p: PluginState)
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect(cat(p$acld_config$acld_host), p$acld_config$acld_port, 1sec); Broker::connect(cat(p$acld_config$acld_host), p$acld_config$acld_port, 1sec);
BrokerComm::subscribe_to_events(p$acld_config$acld_topic); Broker::subscribe_to_events(p$acld_config$acld_topic);
} }
event BrokerComm::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string) event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
{ {
if ( [peer_port, peer_address] !in netcontrol_acld_peers ) if ( [peer_port, peer_address] !in netcontrol_acld_peers )
# ok, this one was none of ours... # ok, this one was none of ours...

@ -96,24 +96,24 @@ function broker_name(p: PluginState) : string
function broker_add_rule_fun(p: PluginState, r: Rule) : bool function broker_add_rule_fun(p: PluginState, r: Rule) : bool
{ {
BrokerComm::event(p$broker_topic, BrokerComm::event_args(broker_add_rule, p$broker_id, r)); Broker::send_event(p$broker_topic, Broker::event_args(broker_add_rule, p$broker_id, r));
return T; return T;
} }
function broker_remove_rule_fun(p: PluginState, r: Rule) : bool function broker_remove_rule_fun(p: PluginState, r: Rule) : bool
{ {
BrokerComm::event(p$broker_topic, BrokerComm::event_args(broker_remove_rule, p$broker_id, r)); Broker::send_event(p$broker_topic, Broker::event_args(broker_remove_rule, p$broker_id, r));
return T; return T;
} }
function broker_init(p: PluginState) function broker_init(p: PluginState)
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect(cat(p$broker_host), p$broker_port, 1sec); Broker::connect(cat(p$broker_host), p$broker_port, 1sec);
BrokerComm::subscribe_to_events(p$broker_topic); Broker::subscribe_to_events(p$broker_topic);
} }
event BrokerComm::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string) event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
{ {
if ( [peer_port, peer_address] !in netcontrol_broker_peers ) if ( [peer_port, peer_address] !in netcontrol_broker_peers )
return; return;

@ -14,7 +14,7 @@ export {
MAC, ##< Activity involving a MAC address. MAC, ##< Activity involving a MAC address.
}; };
## Type of a :bro:id:`Flow` for defining a flow. ## Type for defining a flow.
type Flow: record { type Flow: record {
src_h: subnet &optional; ##< The source IP address/subnet. src_h: subnet &optional; ##< The source IP address/subnet.
src_p: port &optional; ##< The source port number. src_p: port &optional; ##< The source port number.
@ -27,10 +27,10 @@ export {
## Type defining the entity an :bro:id:`Rule` is operating on. ## Type defining the entity an :bro:id:`Rule` is operating on.
type Entity: record { type Entity: record {
ty: EntityType; ##< Type of entity. ty: EntityType; ##< Type of entity.
conn: conn_id &optional; ##< Used with :bro:id:`CONNECTION` . conn: conn_id &optional; ##< Used with :bro:enum:`NetControl::CONNECTION`.
flow: Flow &optional; ##< Used with :bro:id:`FLOW` . flow: Flow &optional; ##< Used with :bro:enum:`NetControl::FLOW`.
ip: subnet &optional; ##< Used with bro:id:`ADDRESS`; can specifiy a CIDR subnet. ip: subnet &optional; ##< Used with :bro:enum:`NetControl::ADDRESS` to specifiy a CIDR subnet.
mac: string &optional; ##< Used with :bro:id:`MAC`. mac: string &optional; ##< Used with :bro:enum:`NetControl::MAC`.
}; };
## Target of :bro:id:`Rule` action. ## Target of :bro:id:`Rule` action.
@ -68,7 +68,7 @@ export {
WHITELIST, WHITELIST,
}; };
## Type of a :bro:id:`FlowMod` for defining a flow modification action. ## Type for defining a flow modification action.
type FlowMod: record { type FlowMod: record {
src_h: addr &optional; ##< The source IP address. src_h: addr &optional; ##< The source IP address.
src_p: count &optional; ##< The source port number. src_p: count &optional; ##< The source port number.
@ -90,8 +90,8 @@ export {
priority: int &default=default_priority; ##< Priority if multiple rules match an entity (larger value is higher priority). priority: int &default=default_priority; ##< Priority if multiple rules match an entity (larger value is higher priority).
location: string &optional; ##< Optional string describing where/what installed the rule. location: string &optional; ##< Optional string describing where/what installed the rule.
out_port: count &optional; ##< Argument for bro:id:`REDIRECT` rules. out_port: count &optional; ##< Argument for :bro:enum:`NetControl::REDIRECT` rules.
mod: FlowMod &optional; ##< Argument for :bro:id:`MODIFY` rules. mod: FlowMod &optional; ##< Argument for :bro:enum:`NetControl::MODIFY` rules.
id: string &default=""; ##< Internally determined unique ID for this rule. Will be set when added. id: string &default=""; ##< Internally determined unique ID for this rule. Will be set when added.
cid: count &default=0; ##< Internally determined unique numeric ID for this rule. Set when added. cid: count &default=0; ##< Internally determined unique numeric ID for this rule. Set when added.

@ -44,6 +44,7 @@ export {
ACTION_ALARM, ACTION_ALARM,
}; };
## Type that represents a set of actions.
type ActionSet: set[Notice::Action]; type ActionSet: set[Notice::Action];
## The notice framework is able to do automatic notice suppression by ## The notice framework is able to do automatic notice suppression by
@ -52,6 +53,7 @@ export {
## suppression. ## suppression.
const default_suppression_interval = 1hrs &redef; const default_suppression_interval = 1hrs &redef;
## The record type that is used for representing and logging notices.
type Info: record { type Info: record {
## An absolute time indicating when the notice occurred, ## An absolute time indicating when the notice occurred,
## defaults to the current network time. ## defaults to the current network time.

@ -47,26 +47,26 @@ function broker_describe(state: ControllerState): string
function broker_flow_mod_fun(state: ControllerState, match: ofp_match, flow_mod: OpenFlow::ofp_flow_mod): bool function broker_flow_mod_fun(state: ControllerState, match: ofp_match, flow_mod: OpenFlow::ofp_flow_mod): bool
{ {
BrokerComm::event(state$broker_topic, BrokerComm::event_args(broker_flow_mod, state$_name, state$broker_dpid, match, flow_mod)); Broker::send_event(state$broker_topic, Broker::event_args(broker_flow_mod, state$_name, state$broker_dpid, match, flow_mod));
return T; return T;
} }
function broker_flow_clear_fun(state: OpenFlow::ControllerState): bool function broker_flow_clear_fun(state: OpenFlow::ControllerState): bool
{ {
BrokerComm::event(state$broker_topic, BrokerComm::event_args(broker_flow_clear, state$_name, state$broker_dpid)); Broker::send_event(state$broker_topic, Broker::event_args(broker_flow_clear, state$_name, state$broker_dpid));
return T; return T;
} }
function broker_init(state: OpenFlow::ControllerState) function broker_init(state: OpenFlow::ControllerState)
{ {
BrokerComm::enable(); Broker::enable();
BrokerComm::connect(cat(state$broker_host), state$broker_port, 1sec); Broker::connect(cat(state$broker_host), state$broker_port, 1sec);
BrokerComm::subscribe_to_events(state$broker_topic); # openflow success and failure events are directly sent back via the other plugin via broker. Broker::subscribe_to_events(state$broker_topic); # openflow success and failure events are directly sent back via the other plugin via broker.
} }
event BrokerComm::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string) event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
{ {
if ( [peer_port, peer_address] !in broker_peers ) if ( [peer_port, peer_address] !in broker_peers )
# ok, this one was none of ours... # ok, this one was none of ours...

@ -18,7 +18,7 @@ export {
event net_stats_update(last_stat: NetStats) event net_stats_update(last_stat: NetStats)
{ {
local ns = net_stats(); local ns = get_net_stats();
local new_dropped = ns$pkts_dropped - last_stat$pkts_dropped; local new_dropped = ns$pkts_dropped - last_stat$pkts_dropped;
if ( new_dropped > 0 ) if ( new_dropped > 0 )
{ {
@ -38,5 +38,5 @@ event bro_init()
# Since this currently only calculates packet drops, let's skip the stats # Since this currently only calculates packet drops, let's skip the stats
# collection if reading traces. # collection if reading traces.
if ( ! reading_traces() ) if ( ! reading_traces() )
schedule stats_collection_interval { net_stats_update(net_stats()) }; schedule stats_collection_interval { net_stats_update(get_net_stats()) };
} }

@ -5,7 +5,8 @@
module SumStats; module SumStats;
export { export {
## The various calculations are all defined as plugins. ## Type to represent the calculations that are available. The calculations
## are all defined as plugins.
type Calculation: enum { type Calculation: enum {
PLACEHOLDER PLACEHOLDER
}; };
@ -39,6 +40,7 @@ export {
str: string &optional; str: string &optional;
}; };
## Represents a reducer.
type Reducer: record { type Reducer: record {
## Observation stream identifier for the reducer ## Observation stream identifier for the reducer
## to attach to. ## to attach to.
@ -56,7 +58,7 @@ export {
normalize_key: function(key: SumStats::Key): Key &optional; normalize_key: function(key: SumStats::Key): Key &optional;
}; };
## Value calculated for an observation stream fed into a reducer. ## Result calculated for an observation stream fed into a reducer.
## Most of the fields are added by plugins. ## Most of the fields are added by plugins.
type ResultVal: record { type ResultVal: record {
## The time when the first observation was added to ## The time when the first observation was added to
@ -71,14 +73,15 @@ export {
num: count &default=0; num: count &default=0;
}; };
## Type to store results for multiple reducers. ## Type to store a table of results for multiple reducers indexed by
## observation stream identifier.
type Result: table[string] of ResultVal; type Result: table[string] of ResultVal;
## Type to store a table of sumstats results indexed by keys. ## Type to store a table of sumstats results indexed by keys.
type ResultTable: table[Key] of Result; type ResultTable: table[Key] of Result;
## SumStats represent an aggregation of reducers along with ## Represents a SumStat, which consists of an aggregation of reducers along
## mechanisms to handle various situations like the epoch ending ## with mechanisms to handle various situations like the epoch ending
## or thresholds being crossed. ## or thresholds being crossed.
## ##
## It's best to not access any global state outside ## It's best to not access any global state outside
@ -101,21 +104,28 @@ export {
## The reducers for the SumStat. ## The reducers for the SumStat.
reducers: set[Reducer]; reducers: set[Reducer];
## Provide a function to calculate a value from the ## A function that will be called once for each observation in order
## :bro:see:`SumStats::Result` structure which will be used ## to calculate a value from the :bro:see:`SumStats::Result` structure
## for thresholding. ## which will be used for thresholding.
## This is required if a *threshold* value is given. ## This function is required if a *threshold* value or
## a *threshold_series* is given.
threshold_val: function(key: SumStats::Key, result: SumStats::Result): double &optional; threshold_val: function(key: SumStats::Key, result: SumStats::Result): double &optional;
## The threshold value for calling the ## The threshold value for calling the *threshold_crossed* callback.
## *threshold_crossed* callback. ## If you need more than one threshold value, then use
## *threshold_series* instead.
threshold: double &optional; threshold: double &optional;
## A series of thresholds for calling the ## A series of thresholds for calling the *threshold_crossed*
## *threshold_crossed* callback. ## callback. These thresholds must be listed in ascending order,
## because a threshold is not checked until the preceding one has
## been crossed.
threshold_series: vector of double &optional; threshold_series: vector of double &optional;
## A callback that is called when a threshold is crossed. ## A callback that is called when a threshold is crossed.
## A threshold is crossed when the value returned from *threshold_val*
## is greater than or equal to the threshold value, but only the first
## time this happens within an epoch.
threshold_crossed: function(key: SumStats::Key, result: SumStats::Result) &optional; threshold_crossed: function(key: SumStats::Key, result: SumStats::Result) &optional;
## A callback that receives each of the results at the ## A callback that receives each of the results at the
@ -130,6 +140,8 @@ export {
}; };
## Create a summary statistic. ## Create a summary statistic.
##
## ss: The SumStat to create.
global create: function(ss: SumStats::SumStat); global create: function(ss: SumStats::SumStat);
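A hedged sketch of tying reducers, thresholds, and observations together (stream and SumStat names are arbitrary; the *sum* result field is added by the SUM plugin):

event bro_init()
    {
    local r1 = SumStats::Reducer($stream="conn.attempts", $apply=set(SumStats::SUM));

    SumStats::create([$name="detect-scanners",
                      $epoch=5min,
                      $reducers=set(r1),
                      $threshold=100.0,
                      $threshold_val(key: SumStats::Key, result: SumStats::Result) =
                          {
                          return result["conn.attempts"]$sum;
                          },
                      $threshold_crossed(key: SumStats::Key, result: SumStats::Result) =
                          {
                          print fmt("%s made at least %.0f connection attempts",
                                    key$host, result["conn.attempts"]$sum);
                          }]);
    }

event connection_attempt(c: connection)
    {
    SumStats::observe("conn.attempts", SumStats::Key($host=c$id$orig_h),
                      SumStats::Observation($num=1));
    }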
## Add data into an observation stream. This should be ## Add data into an observation stream. This should be

@ -1,3 +1,5 @@
##! Calculate the average.
@load ../main @load ../main
module SumStats; module SumStats;
@ -9,7 +11,7 @@ export {
}; };
redef record ResultVal += { redef record ResultVal += {
## For numeric data, this calculates the average of all values. ## For numeric data, this is the average of all values.
average: double &optional; average: double &optional;
}; };
} }

@ -1,3 +1,5 @@
##! Calculate the number of unique values (using the HyperLogLog algorithm).
@load base/frameworks/sumstats @load base/frameworks/sumstats
module SumStats; module SumStats;

@ -1,3 +1,5 @@
##! Keep the last X observations.
@load base/frameworks/sumstats @load base/frameworks/sumstats
@load base/utils/queue @load base/utils/queue

@ -1,3 +1,5 @@
##! Find the maximum value.
@load ../main @load ../main
module SumStats; module SumStats;
@ -9,7 +11,7 @@ export {
}; };
redef record ResultVal += { redef record ResultVal += {
## For numeric data, this tracks the maximum value given. ## For numeric data, this tracks the maximum value.
max: double &optional; max: double &optional;
}; };
} }

@ -1,3 +1,5 @@
##! Find the minimum value.
@load ../main @load ../main
module SumStats; module SumStats;
@ -9,7 +11,7 @@ export {
}; };
redef record ResultVal += { redef record ResultVal += {
## For numeric data, this tracks the minimum value given. ## For numeric data, this tracks the minimum value.
min: double &optional; min: double &optional;
}; };
} }

@ -1,3 +1,5 @@
##! Keep a random sample of values.
@load base/frameworks/sumstats/main @load base/frameworks/sumstats/main
module SumStats; module SumStats;
@ -10,7 +12,7 @@ export {
}; };
redef record Reducer += { redef record Reducer += {
## A number of sample Observations to collect. ## The number of sample Observations to collect.
num_samples: count &default=0; num_samples: count &default=0;
}; };

@ -1,3 +1,5 @@
##! Calculate the standard deviation.
@load ./variance @load ./variance
@load ../main @load ../main
@ -5,7 +7,7 @@ module SumStats;
export { export {
redef enum Calculation += { redef enum Calculation += {
## Find the standard deviation of the values. ## Calculate the standard deviation of the values.
STD_DEV STD_DEV
}; };

@ -1,11 +1,13 @@
##! Calculate the sum.
@load ../main @load ../main
module SumStats; module SumStats;
export { export {
redef enum Calculation += { redef enum Calculation += {
## Sums the values given. For string values, ## Calculate the sum of the values. For string values,
## this will be the number of strings given. ## this will be the number of strings.
SUM SUM
}; };

@ -1,3 +1,5 @@
##! Keep the top-k (i.e., most frequently occurring) observations.
@load base/frameworks/sumstats @load base/frameworks/sumstats
module SumStats; module SumStats;
@ -9,10 +11,13 @@ export {
}; };
redef enum Calculation += { redef enum Calculation += {
## Keep a top-k list of values.
TOPK TOPK
}; };
redef record ResultVal += { redef record ResultVal += {
## A handle which can be passed to some built-in functions to get
## the top-k results.
topk: opaque of topk &optional; topk: opaque of topk &optional;
}; };

@ -1,10 +1,12 @@
##! Calculate the number of unique values.
@load ../main @load ../main
module SumStats; module SumStats;
export { export {
redef record Reducer += { redef record Reducer += {
## Maximum number of unique elements to store. ## Maximum number of unique values to store.
unique_max: count &optional; unique_max: count &optional;
}; };
@ -15,7 +17,7 @@ export {
redef record ResultVal += { redef record ResultVal += {
## If cardinality is being tracked, the number of unique ## If cardinality is being tracked, the number of unique
## items is tracked here. ## values is tracked here.
unique: count &default=0; unique: count &default=0;
}; };
} }

@ -1,3 +1,5 @@
##! Calculate the variance.
@load ./average @load ./average
@load ../main @load ../main
@ -5,12 +7,12 @@ module SumStats;
export { export {
redef enum Calculation += { redef enum Calculation += {
## Find the variance of the values. ## Calculate the variance of the values.
VARIANCE VARIANCE
}; };
redef record ResultVal += { redef record ResultVal += {
## For numeric data, this calculates the variance. ## For numeric data, this is the variance.
variance: double &optional; variance: double &optional;
}; };
} }

@ -474,14 +474,38 @@ type NetStats: record {
bytes_recvd: count &default=0; ##< Bytes received by Bro. bytes_recvd: count &default=0; ##< Bytes received by Bro.
}; };
## Statistics about Bro's resource consumption. type ConnStats: record {
total_conns: count; ##<
current_conns: count; ##<
current_conns_extern: count; ##<
sess_current_conns: count; ##<
num_packets: count;
num_fragments: count;
max_fragments: count;
num_tcp_conns: count; ##< Current number of TCP connections in memory.
max_tcp_conns: count; ##< Maximum number of concurrent TCP connections so far.
cumulative_tcp_conns: count; ##< Total number of TCP connections so far.
num_udp_conns: count; ##< Current number of UDP flows in memory.
max_udp_conns: count; ##< Maximum number of concurrent UDP flows so far.
cumulative_udp_conns: count; ##< Total number of UDP flows so far.
num_icmp_conns: count; ##< Current number of ICMP flows in memory.
max_icmp_conns: count; ##< Maximum number of concurrent ICMP flows so far.
cumulative_icmp_conns: count; ##< Total number of ICMP flows so far.
killed_by_inactivity: count;
};
## Statistics about Bro's process.
## ##
## .. bro:see:: resource_usage ## .. bro:see:: get_proc_stats
## ##
## .. note:: All process-level values refer to Bro's main process only, not to ## .. note:: All process-level values refer to Bro's main process only, not to
## the child process it spawns for doing communication. ## the child process it spawns for doing communication.
type bro_resources: record { type ProcStats: record {
version: string; ##< Bro version string.
debug: bool; ##< True if compiled with --enable-debug. debug: bool; ##< True if compiled with --enable-debug.
start_time: time; ##< Start time of process. start_time: time; ##< Start time of process.
real_time: interval; ##< Elapsed real time since Bro started running. real_time: interval; ##< Elapsed real time since Bro started running.
@ -494,46 +518,85 @@ type bro_resources: record {
blocking_input: count; ##< Blocking input operations. blocking_input: count; ##< Blocking input operations.
blocking_output: count; ##< Blocking output operations. blocking_output: count; ##< Blocking output operations.
num_context: count; ##< Number of involuntary context switches. num_context: count; ##< Number of involuntary context switches.
};
num_TCP_conns: count; ##< Current number of TCP connections in memory. type EventStats: record {
num_UDP_conns: count; ##< Current number of UDP flows in memory. queued: count; ##< Total number of events queued so far.
num_ICMP_conns: count; ##< Current number of ICMP flows in memory. dispatched: count; ##< Total number of events dispatched so far.
num_fragments: count; ##< Current number of fragments pending reassembly.
num_packets: count; ##< Total number of packets processed to date.
num_timers: count; ##< Current number of pending timers.
num_events_queued: count; ##< Total number of events queued so far.
num_events_dispatched: count; ##< Total number of events dispatched so far.
max_TCP_conns: count; ##< Maximum number of concurrent TCP connections so far.
max_UDP_conns: count; ##< Maximum number of concurrent UDP connections so far.
max_ICMP_conns: count; ##< Maximum number of concurrent ICMP connections so far.
max_fragments: count; ##< Maximum number of concurrently buffered fragments so far.
max_timers: count; ##< Maximum number of concurrent timers pending so far.
}; };
## Summary statistics of all regular expression matchers. ## Summary statistics of all regular expression matchers.
## ##
## .. bro:see:: get_reassembler_stats
type ReassemblerStats: record {
file_size: count; ##< Byte size of File reassembly tracking.
frag_size: count; ##< Byte size of Fragment reassembly tracking.
tcp_size: count; ##< Byte size of TCP reassembly tracking.
unknown_size: count; ##< Byte size of reassembly tracking for unknown purposes.
};
## Statistics of all regular expression matchers.
##
## .. bro:see:: get_matcher_stats ## .. bro:see:: get_matcher_stats
type matcher_stats: record { type MatcherStats: record {
matchers: count; ##< Number of distinct RE matchers. matchers: count; ##< Number of distinct RE matchers.
nfa_states: count; ##< Number of NFA states across all matchers.
dfa_states: count; ##< Number of DFA states across all matchers. dfa_states: count; ##< Number of DFA states across all matchers.
computed: count; ##< Number of computed DFA state transitions. computed: count; ##< Number of computed DFA state transitions.
mem: count; ##< Number of bytes used by DFA states. mem: count; ##< Number of bytes used by DFA states.
hits: count; ##< Number of cache hits. hits: count; ##< Number of cache hits.
misses: count; ##< Number of cache misses. misses: count; ##< Number of cache misses.
avg_nfa_states: count; ##< Average number of NFA states across all matchers. };
## Statistics of timers.
##
## .. bro:see:: get_timer_stats
type TimerStats: record {
current: count; ##< Current number of pending timers.
max: count; ##< Maximum number of concurrent timers pending so far.
cumulative: count; ##< Cumulative number of timers scheduled.
};
## Statistics of file analysis.
##
## .. bro:see:: get_file_analysis_stats
type FileAnalysisStats: record {
current: count; ##< Current number of files being analyzed.
max: count; ##< Maximum number of concurrent files so far.
cumulative: count; ##< Cumulative number of files analyzed.
};
## Statistics related to Bro's active use of DNS. These numbers are
## about Bro performing DNS queries on its own, not traffic
## being seen.
##
## .. bro:see:: get_dns_stats
type DNSStats: record {
requests: count; ##< Number of DNS requests made.
successful: count; ##< Number of successful DNS replies.
failed: count; ##< Number of DNS reply failures.
pending: count; ##< Current pending queries.
cached_hosts: count; ##< Number of cached hosts.
cached_addresses: count; ##< Number of cached addresses.
}; };
## Statistics about number of gaps in TCP connections. ## Statistics about number of gaps in TCP connections.
## ##
## .. bro:see:: gap_report get_gap_summary ## .. bro:see:: get_gap_stats
type gap_info: record { type GapStats: record {
ack_events: count; ##< How many ack events *could* have had gaps. ack_events: count; ##< How many ack events *could* have had gaps.
ack_bytes: count; ##< How many bytes those covered. ack_bytes: count; ##< How many bytes those covered.
gap_events: count; ##< How many *did* have gaps. gap_events: count; ##< How many *did* have gaps.
gap_bytes: count; ##< How many bytes were missing in the gaps. gap_bytes: count; ##< How many bytes were missing in the gaps.
}; };
## Statistics about threads.
##
## .. bro:see:: get_thread_stats
type ThreadStats: record {
num_threads: count;
};
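As a usage sketch of the new statistics BIFs (the event name and interval are arbitrary):

global stats_tick: event();

event stats_tick()
    {
    local ns = get_net_stats();
    local es = get_event_stats();
    print fmt("pkts recvd=%d dropped=%d; events queued=%d dispatched=%d",
              ns$pkts_recvd, ns$pkts_dropped, es$queued, es$dispatched);
    schedule 1min { stats_tick() };
    }

event bro_init()
    {
    schedule 1min { stats_tick() };
    }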
## Deprecated. ## Deprecated.
## ##
## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere ## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere
@ -793,71 +856,6 @@ type entropy_test_result: record {
serial_correlation: double; ##< Serial correlation coefficient. serial_correlation: double; ##< Serial correlation coefficient.
}; };
# Prototypes of Bro built-in functions.
@load base/bif/strings.bif
@load base/bif/bro.bif
@load base/bif/reporter.bif
## Deprecated. This is superseded by the new logging framework.
global log_file_name: function(tag: string): string &redef;
## Deprecated. This is superseded by the new logging framework.
global open_log_file: function(tag: string): file &redef;
## Specifies a directory for Bro to store its persistent state. All globals can
## be declared persistent via the :bro:attr:`&persistent` attribute.
const state_dir = ".state" &redef;
## Length of the delays inserted when storing state incrementally. To avoid
## dropping packets when serializing larger volumes of persistent state to
## disk, Bro interleaves the operation with continued packet processing.
const state_write_delay = 0.01 secs &redef;
global done_with_network = F;
event net_done(t: time) { done_with_network = T; }
function log_file_name(tag: string): string
{
local suffix = getenv("BRO_LOG_SUFFIX") == "" ? "log" : getenv("BRO_LOG_SUFFIX");
return fmt("%s.%s", tag, suffix);
}
function open_log_file(tag: string): file
{
return open(log_file_name(tag));
}
## Internal function.
function add_interface(iold: string, inew: string): string
{
if ( iold == "" )
return inew;
else
return fmt("%s %s", iold, inew);
}
## Network interfaces to listen on. Use ``redef interfaces += "eth0"`` to
## extend.
global interfaces = "" &add_func = add_interface;
## Internal function.
function add_signature_file(sold: string, snew: string): string
{
if ( sold == "" )
return snew;
else
return cat(sold, " ", snew);
}
## Signature files to read. Use ``redef signature_files += "foo.sig"`` to
## extend. Signature files added this way will be searched relative to
## ``BROPATH``. Using the ``@load-sigs`` directive instead is preferred
## since that can search paths relative to the current script.
global signature_files = "" &add_func = add_signature_file;
## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
# TCP values for :bro:see:`endpoint` *state* field. # TCP values for :bro:see:`endpoint` *state* field.
# todo:: these should go into an enum to make them autodoc'able. # todo:: these should go into an enum to make them autodoc'able.
const TCP_INACTIVE = 0; ##< Endpoint is still inactive. const TCP_INACTIVE = 0; ##< Endpoint is still inactive.
@ -1768,6 +1766,71 @@ type gtp_delete_pdp_ctx_response_elements: record {
ext: gtp_private_extension &optional; ext: gtp_private_extension &optional;
}; };
# Prototypes of Bro built-in functions.
@load base/bif/strings.bif
@load base/bif/bro.bif
@load base/bif/reporter.bif
## Deprecated. This is superseded by the new logging framework.
global log_file_name: function(tag: string): string &redef;
## Deprecated. This is superseded by the new logging framework.
global open_log_file: function(tag: string): file &redef;
## Specifies a directory for Bro to store its persistent state. All globals can
## be declared persistent via the :bro:attr:`&persistent` attribute.
const state_dir = ".state" &redef;
## Length of the delays inserted when storing state incrementally. To avoid
## dropping packets when serializing larger volumes of persistent state to
## disk, Bro interleaves the operation with continued packet processing.
const state_write_delay = 0.01 secs &redef;
global done_with_network = F;
event net_done(t: time) { done_with_network = T; }
function log_file_name(tag: string): string
{
local suffix = getenv("BRO_LOG_SUFFIX") == "" ? "log" : getenv("BRO_LOG_SUFFIX");
return fmt("%s.%s", tag, suffix);
}
function open_log_file(tag: string): file
{
return open(log_file_name(tag));
}
## Internal function.
function add_interface(iold: string, inew: string): string
{
if ( iold == "" )
return inew;
else
return fmt("%s %s", iold, inew);
}
## Network interfaces to listen on. Use ``redef interfaces += "eth0"`` to
## extend.
global interfaces = "" &add_func = add_interface;
## Internal function.
function add_signature_file(sold: string, snew: string): string
{
if ( sold == "" )
return snew;
else
return cat(sold, " ", snew);
}
## Signature files to read. Use ``redef signature_files += "foo.sig"`` to
## extend. Signature files added this way will be searched relative to
## ``BROPATH``. Using the ``@load-sigs`` directive instead is preferred
## since that can search paths relative to the current script.
global signature_files = "" &add_func = add_signature_file;
## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
## Definition of "secondary filters". A secondary filter is a BPF filter given ## Definition of "secondary filters". A secondary filter is a BPF filter given
## as index in this table. For each such filter, the corresponding event is ## as index in this table. For each such filter, the corresponding event is
## raised for all matching packets. ## raised for all matching packets.
@ -3435,23 +3498,17 @@ global pkt_profile_file: file &redef;
## .. bro:see:: load_sample ## .. bro:see:: load_sample
global load_sample_freq = 20 &redef; global load_sample_freq = 20 &redef;
## Rate at which to generate :bro:see:`gap_report` events assessing to what
## degree the measurement process appears to exhibit loss.
##
## .. bro:see:: gap_report
const gap_report_freq = 1.0 sec &redef;
## Whether to attempt to automatically detect SYN/FIN/RST-filtered trace ## Whether to attempt to automatically detect SYN/FIN/RST-filtered trace
## and not report missing segments for such connections. ## and not report missing segments for such connections.
## If this is enabled, then missing data at the end of connections may not ## If this is enabled, then missing data at the end of connections may not
## be reported via :bro:see:`content_gap`. ## be reported via :bro:see:`content_gap`.
const detect_filtered_trace = F &redef; const detect_filtered_trace = F &redef;
## Whether we want :bro:see:`content_gap` and :bro:see:`gap_report` for partial ## Whether we want :bro:see:`content_gap` and :bro:see:`get_gap_summary` for partial
## connections. A connection is partial if it is missing a full handshake. Note ## connections. A connection is partial if it is missing a full handshake. Note
## that gap reports for partial connections might not be reliable. ## that gap reports for partial connections might not be reliable.
## ##
## .. bro:see:: content_gap gap_report partial_connection ## .. bro:see:: content_gap get_gap_summary partial_connection
const report_gaps_for_partial = F &redef; const report_gaps_for_partial = F &redef;
## Flag to prevent Bro from exiting automatically when input is exhausted. ## Flag to prevent Bro from exiting automatically when input is exhausted.

@ -37,8 +37,10 @@
@load base/frameworks/reporter @load base/frameworks/reporter
@load base/frameworks/sumstats @load base/frameworks/sumstats
@load base/frameworks/tunnels @load base/frameworks/tunnels
@ifdef ( Broker::enable )
@load base/frameworks/openflow @load base/frameworks/openflow
@load base/frameworks/netcontrol @load base/frameworks/netcontrol
@endif
@load base/protocols/conn @load base/protocols/conn
@load base/protocols/dhcp @load base/protocols/dhcp
@ -46,6 +48,7 @@
@load base/protocols/dns @load base/protocols/dns
@load base/protocols/ftp @load base/protocols/ftp
@load base/protocols/http @load base/protocols/http
@load base/protocols/imap
@load base/protocols/irc @load base/protocols/irc
@load base/protocols/krb @load base/protocols/krb
@load base/protocols/modbus @load base/protocols/modbus
@ -53,6 +56,7 @@
@load base/protocols/pop3 @load base/protocols/pop3
@load base/protocols/radius @load base/protocols/radius
@load base/protocols/rdp @load base/protocols/rdp
@load base/protocols/rfb
@load base/protocols/sip @load base/protocols/sip
@load base/protocols/snmp @load base/protocols/snmp
@load base/protocols/smtp @load base/protocols/smtp
@ -61,6 +65,7 @@
@load base/protocols/ssl @load base/protocols/ssl
@load base/protocols/syslog @load base/protocols/syslog
@load base/protocols/tunnels @load base/protocols/tunnels
@load base/protocols/xmpp
@load base/files/pe @load base/files/pe
@load base/files/hash @load base/files/hash

@ -26,7 +26,7 @@ event ChecksumOffloading::check()
if ( done ) if ( done )
return; return;
local pkts_recvd = net_stats()$pkts_recvd; local pkts_recvd = get_net_stats()$pkts_recvd;
local bad_ip_checksum_pct = (pkts_recvd != 0) ? (bad_ip_checksums*1.0 / pkts_recvd*1.0) : 0; local bad_ip_checksum_pct = (pkts_recvd != 0) ? (bad_ip_checksums*1.0 / pkts_recvd*1.0) : 0;
local bad_tcp_checksum_pct = (pkts_recvd != 0) ? (bad_tcp_checksums*1.0 / pkts_recvd*1.0) : 0; local bad_tcp_checksum_pct = (pkts_recvd != 0) ? (bad_tcp_checksums*1.0 / pkts_recvd*1.0) : 0;
local bad_udp_checksum_pct = (pkts_recvd != 0) ? (bad_udp_checksums*1.0 / pkts_recvd*1.0) : 0; local bad_udp_checksum_pct = (pkts_recvd != 0) ? (bad_udp_checksums*1.0 / pkts_recvd*1.0) : 0;

@ -26,6 +26,7 @@ export {
[49] = "DHCID", [99] = "SPF", [100] = "DINFO", [101] = "UID", [49] = "DHCID", [99] = "SPF", [100] = "DINFO", [101] = "UID",
[102] = "GID", [103] = "UNSPEC", [249] = "TKEY", [250] = "TSIG", [102] = "GID", [103] = "UNSPEC", [249] = "TKEY", [250] = "TSIG",
[251] = "IXFR", [252] = "AXFR", [253] = "MAILB", [254] = "MAILA", [251] = "IXFR", [252] = "AXFR", [253] = "MAILB", [254] = "MAILA",
[257] = "CAA",
[32768] = "TA", [32769] = "DLV", [32768] = "TA", [32769] = "DLV",
[ANY] = "*", [ANY] = "*",
} &default = function(n: count): string { return fmt("query-%d", n); }; } &default = function(n: count): string { return fmt("query-%d", n); };

@ -52,7 +52,7 @@ export {
## The Recursion Available bit in a response message indicates ## The Recursion Available bit in a response message indicates
## that the name server supports recursive queries. ## that the name server supports recursive queries.
RA: bool &log &default=F; RA: bool &log &default=F;
## A reserved field that is currently supposed to be zero in all ## A reserved field that is usually zero in
## queries and responses. ## queries and responses.
Z: count &log &default=0; Z: count &log &default=0;
## The set of resource descriptions in the query answer. ## The set of resource descriptions in the query answer.

@ -21,6 +21,7 @@ export {
## not. ## not.
const default_capture_password = F &redef; const default_capture_password = F &redef;
## The record type which contains the fields of the HTTP log.
type Info: record { type Info: record {
## Timestamp for when the request happened. ## Timestamp for when the request happened.
ts: time &log; ts: time &log;

@ -0,0 +1,5 @@
Support for the Internet Message Access Protocol (IMAP).
Note that the IMAP analyzer currently only analyzes IMAP sessions up to the
point where they either switch to TLS via StartTLS or make clear that they
will not. Hence, it does not extract mail content from IMAP sessions, only
X509 certificates.
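One way to use what the analyzer does provide is to watch for certificates seen on IMAP's well-known port; a minimal sketch (the port check is only a rough heuristic):

event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
    {
    if ( ! f?$conns )
        return;

    for ( cid in f$conns )
        if ( cid$resp_p == 143/tcp )
            print fmt("IMAP StartTLS certificate in %s: %s", f$conns[cid]$uid, cert$subject);
    }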

@ -0,0 +1,2 @@
@load ./main

@ -0,0 +1,11 @@
module IMAP;
const ports = { 143/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_IMAP, ports);
}

@ -0,0 +1 @@
Support for Remote FrameBuffer analysis. This includes all VNC servers.

@ -0,0 +1,3 @@
# Generated by binpac_quickstart
@load ./main
@load-sigs ./dpd.sig

@ -0,0 +1,12 @@
signature dpd_rfb_server {
ip-proto == tcp
payload /^RFB/
requires-reverse-signature dpd_rfb_client
enable "rfb"
}
signature dpd_rfb_client {
ip-proto == tcp
payload /^RFB/
tcp-state originator
}

@ -0,0 +1,165 @@
module RFB;
export {
redef enum Log::ID += { LOG };
## The record type which contains the fields of the RFB log.
type Info: record {
## Timestamp for when the event happened.
ts: time &log;
## Unique ID for the connection.
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
## Major version of the client.
client_major_version: string &log &optional;
## Minor version of the client.
client_minor_version: string &log &optional;
## Major version of the server.
server_major_version: string &log &optional;
## Minor version of the server.
server_minor_version: string &log &optional;
## Identifier of authentication method used.
authentication_method: string &log &optional;
## Whether or not authentication was successful.
auth: bool &log &optional;
## Whether the client has an exclusive or a shared session.
share_flag: bool &log &optional;
## Name of the screen that is being shared.
desktop_name: string &log &optional;
## Width of the screen that is being shared.
width: count &log &optional;
## Height of the screen that is being shared.
height: count &log &optional;
## Internally used value to determine if this connection
## has already been logged.
done: bool &default=F;
};
global log_rfb: event(rec: Info);
}
function friendly_auth_name(auth: count): string
{
switch (auth) {
case 0:
return "Invalid";
case 1:
return "None";
case 2:
return "VNC";
case 16:
return "Tight";
case 17:
return "Ultra";
case 18:
return "TLS";
case 19:
return "VeNCrypt";
case 20:
return "GTK-VNC SASL";
case 21:
return "MD5 hash authentication";
case 22:
return "Colin Dean xvp";
case 30:
return "Apple Remote Desktop";
}
return "RealVNC";
}
redef record connection += {
rfb: Info &optional;
};
event bro_init() &priority=5
{
Log::create_stream(RFB::LOG, [$columns=Info, $ev=log_rfb, $path="rfb"]);
}
function write_log(c:connection)
{
local state = c$rfb;
if ( state$done )
{
return;
}
Log::write(RFB::LOG, c$rfb);
c$rfb$done = T;
}
function set_session(c: connection)
{
if ( ! c?$rfb )
{
local info: Info;
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
c$rfb = info;
}
}
event rfb_event(c: connection) &priority=5
{
set_session(c);
}
event rfb_client_version(c: connection, major_version: string, minor_version: string) &priority=5
{
set_session(c);
c$rfb$client_major_version = major_version;
c$rfb$client_minor_version = minor_version;
}
event rfb_server_version(c: connection, major_version: string, minor_version: string) &priority=5
{
set_session(c);
c$rfb$server_major_version = major_version;
c$rfb$server_minor_version = minor_version;
}
event rfb_authentication_type(c: connection, authtype: count) &priority=5
{
set_session(c);
c$rfb$authentication_method = friendly_auth_name(authtype);
}
event rfb_server_parameters(c: connection, name: string, width: count, height: count) &priority=5
{
set_session(c);
c$rfb$desktop_name = name;
c$rfb$width = width;
c$rfb$height = height;
}
event rfb_server_parameters(c: connection, name: string, width: count, height: count) &priority=-5
{
write_log(c);
}
event rfb_auth_result(c: connection, result: bool) &priority=5
{
c$rfb$auth = !result;
}
event rfb_share_flag(c: connection, flag: bool) &priority=5
{
c$rfb$share_flag = flag;
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$rfb )
{
write_log(c);
}
}
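A small consumer sketch for the new stream, using the log event declared above:

event RFB::log_rfb(rec: RFB::Info)
    {
    if ( rec?$desktop_name )
        print fmt("VNC session %s shares desktop \"%s\"", rec$uid, rec$desktop_name);
    }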

@ -10,6 +10,7 @@ module SIP;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the SIP log.
type Info: record { type Info: record {
## Timestamp for when the request happened. ## Timestamp for when the request happened.
ts: time &log; ts: time &log;

@ -7,6 +7,7 @@ module SMTP;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the SMTP log.
type Info: record { type Info: record {
## Time when the message was first seen. ## Time when the message was first seen.
ts: time &log; ts: time &log;

View file

@ -6,6 +6,7 @@ module SOCKS;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the SOCKS log.
type Info: record { type Info: record {
## Time when the proxy connection was first detected. ## Time when the proxy connection was first detected.
ts: time &log; ts: time &log;

View file

@ -8,6 +8,7 @@ export {
## The SSH protocol logging stream identifier. ## The SSH protocol logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the SSH log.
type Info: record { type Info: record {
## Time when the SSH connection began. ## Time when the SSH connection began.
ts: time &log; ts: time &log;

View file

@ -8,6 +8,7 @@ module SSL;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the SSL log.
type Info: record { type Info: record {
## Time when the SSL connection was first detected. ## Time when the SSL connection was first detected.
ts: time &log; ts: time &log;

View file

@ -8,6 +8,7 @@ module Syslog;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the fields of the syslog log.
type Info: record { type Info: record {
## Timestamp when the syslog message was seen. ## Timestamp when the syslog message was seen.
ts: time &log; ts: time &log;

View file

@ -0,0 +1,5 @@
Support for the Extensible Messaging and Presence Protocol (XMPP).
Note that the XMPP analyzer currently only analyzes XMPP sessions up to the
point where they either switch to TLS via StartTLS or make clear that they
will not. Hence, no actual chat content is extracted from XMPP sessions, only
X509 certificates.
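Because the analyzer hands the connection over to the SSL analyzer once
StartTLS succeeds, certificate data surfaces through the usual X509 events.
A minimal consumer sketch (illustrative only, not shipped with this patch):

event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
    {
    print fmt("certificate subject seen (possibly via XMPP StartTLS): %s", cert$subject);
    }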

View file

@ -0,0 +1,3 @@
@load ./main
@load-sigs ./dpd.sig

View file

@ -0,0 +1,5 @@
signature dpd_xmpp {
ip-proto == tcp
payload /^(<\?xml[^?>]*\?>)?[\n\r ]*<stream:stream [^>]*xmlns='jabber:/
enable "xmpp"
}
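For reference, the payload pattern above is meant to catch a plain-text XMPP
stream opener before any TLS upgrade; a hypothetical client greeting that the
regex would match looks like:

<?xml version='1.0'?>
<stream:stream to='example.com' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams' version='1.0'>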

View file

@ -0,0 +1,11 @@
module XMPP;
const ports = { 5222/tcp, 5269/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_XMPP, ports);
}

View file

@ -22,30 +22,10 @@ event Control::id_value_request(id: string)
event Control::peer_status_request() event Control::peer_status_request()
{ {
local status = "";
for ( p in Communication::nodes )
{
local peer = Communication::nodes[p];
if ( ! peer$connected )
next;
local res = resource_usage();
status += fmt("%.6f peer=%s host=%s events_in=%s events_out=%s ops_in=%s ops_out=%s bytes_in=? bytes_out=?\n",
network_time(),
peer$peer$descr, peer$host,
res$num_events_queued, res$num_events_dispatched,
res$blocking_input, res$blocking_output);
}
event Control::peer_status_response(status);
} }
event Control::net_stats_request() event Control::net_stats_request()
{ {
local ns = net_stats();
local reply = fmt("%.6f recvd=%d dropped=%d link=%d\n", network_time(),
ns$pkts_recvd, ns$pkts_dropped, ns$pkts_link);
event Control::net_stats_response(reply);
} }
event Control::configuration_update_request() event Control::configuration_update_request()
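The removed bodies above relied on the old net_stats() and resource_usage()
BIFs. A hedged sketch of how the network-statistics reply could be rebuilt on
top of the new get_net_stats() function (illustrative only, not part of this
change):

event Control::net_stats_request()
    {
    local ns = get_net_stats();
    local reply = fmt("%.6f recvd=%d dropped=%d link=%d\n", network_time(),
                      ns$pkts_recvd, ns$pkts_dropped, ns$pkts_link);
    event Control::net_stats_response(reply);
    }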

View file

@ -0,0 +1,20 @@
module Files;
export {
redef record Files::Info += {
## The information density of the contents of the file,
## expressed as a number of bits per character.
entropy: double &log &optional;
};
}
event file_new(f: fa_file)
{
Files::add_analyzer(f, Files::ANALYZER_ENTROPY);
}
event file_entropy(f: fa_file, ent: entropy_test_result)
{
f$info$entropy = ent$entropy;
}
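A hedged usage sketch for the new entropy field (the 7.5 threshold is purely
illustrative and not part of this patch): other scripts can react to the
value, e.g. to single out files that look compressed or encrypted:

event file_state_remove(f: fa_file)
    {
    if ( f?$info && f$info?$entropy && f$info$entropy > 7.5 )
        print fmt("high-entropy file %s: %.2f bits per character", f$id, f$info$entropy);
    }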

View file

@ -20,6 +20,7 @@ event ssl_established(c: connection)
if ( c$ssl$cert_chain[0]$x509?$certificate && c$ssl$cert_chain[0]$x509$certificate?$cn ) if ( c$ssl$cert_chain[0]$x509?$certificate && c$ssl$cert_chain[0]$x509$certificate?$cn )
Intel::seen([$indicator=c$ssl$cert_chain[0]$x509$certificate$cn, Intel::seen([$indicator=c$ssl$cert_chain[0]$x509$certificate$cn,
$indicator_type=Intel::DOMAIN, $indicator_type=Intel::DOMAIN,
$fuid=c$ssl$cert_chain_fuids[0],
$conn=c, $conn=c,
$where=X509::IN_CERT]); $where=X509::IN_CERT]);
} }

View file

@ -26,3 +26,14 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certifi
$where=X509::IN_CERT]); $where=X509::IN_CERT]);
} }
} }
event file_hash(f: fa_file, kind: string, hash: string)
{
if ( ! f?$info || ! f$info?$x509 || kind != "sha1" )
return;
Intel::seen([$indicator=hash,
$indicator_type=Intel::CERT_HASH,
$f=f,
$where=X509::IN_CERT]);
}
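For illustration, an intel feed line that the new file_hash hook would match
could look like the following (tab-separated; hash value and source name are
hypothetical):

#fields	indicator	indicator_type	meta.source
da39a3ee5e6b4b0d3255bfef95601890afd80709	Intel::CERT_HASH	local-cert-feed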

View file

@ -56,7 +56,7 @@ event CaptureLoss::take_measurement(last_ts: time, last_acks: count, last_gaps:
} }
local now = network_time(); local now = network_time();
local g = get_gap_summary(); local g = get_gap_stats();
local acks = g$ack_events - last_acks; local acks = g$ack_events - last_acks;
local gaps = g$gap_events - last_gaps; local gaps = g$gap_events - last_gaps;
local pct_lost = (acks == 0) ? 0.0 : (100 * (1.0 * gaps) / (1.0 * acks)); local pct_lost = (acks == 0) ? 0.0 : (100 * (1.0 * gaps) / (1.0 * acks));

View file

@ -1,6 +1,4 @@
##! Log memory/packet/lag statistics. Differs from ##! Log memory/packet/lag statistics.
##! :doc:`/scripts/policy/misc/profiling.bro` in that this
##! is lighter-weight (much less info, and less load to generate).
@load base/frameworks/notice @load base/frameworks/notice
@ -10,7 +8,7 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## How often stats are reported. ## How often stats are reported.
const stats_report_interval = 1min &redef; const report_interval = 5min &redef;
type Info: record { type Info: record {
## Timestamp for the measurement. ## Timestamp for the measurement.
@ -21,27 +19,63 @@ export {
mem: count &log; mem: count &log;
## Number of packets processed since the last stats interval. ## Number of packets processed since the last stats interval.
pkts_proc: count &log; pkts_proc: count &log;
## Number of events processed since the last stats interval. ## Number of bytes received since the last stats interval if
events_proc: count &log;
## Number of events that have been queued since the last stats
## interval.
events_queued: count &log;
## Lag between the wall clock and packet timestamps if reading
## live traffic.
lag: interval &log &optional;
## Number of packets received since the last stats interval if
## reading live traffic. ## reading live traffic.
pkts_recv: count &log &optional; bytes_recv: count &log;
## Number of packets dropped since the last stats interval if ## Number of packets dropped since the last stats interval if
## reading live traffic. ## reading live traffic.
pkts_dropped: count &log &optional; pkts_dropped: count &log &optional;
## Number of packets seen on the link since the last stats ## Number of packets seen on the link since the last stats
## interval if reading live traffic. ## interval if reading live traffic.
pkts_link: count &log &optional; pkts_link: count &log &optional;
## Number of bytes received since the last stats interval if ## Lag between the wall clock and packet timestamps if reading
## reading live traffic. ## live traffic.
bytes_recv: count &log &optional; pkt_lag: interval &log &optional;
## Number of events processed since the last stats interval.
events_proc: count &log;
## Number of events that have been queued since the last stats
## interval.
events_queued: count &log;
## TCP connections currently in memory.
active_tcp_conns: count &log;
## UDP connections currently in memory.
active_udp_conns: count &log;
## ICMP connections currently in memory.
active_icmp_conns: count &log;
## TCP connections seen since last stats interval.
tcp_conns: count &log;
## UDP connections seen since last stats interval.
udp_conns: count &log;
## ICMP connections seen since last stats interval.
icmp_conns: count &log;
## Number of timers scheduled since last stats interval.
timers: count &log;
## Current number of scheduled timers.
active_timers: count &log;
## Number of files seen since last stats interval.
files: count &log;
## Current number of files actively being seen.
active_files: count &log;
## Number of DNS requests seen since last stats interval.
dns_requests: count &log;
## Current number of DNS requests awaiting a reply.
active_dns_requests: count &log;
## Current size of TCP data in reassembly.
reassem_tcp_size: count &log;
## Current size of File data in reassembly.
reassem_file_size: count &log;
## Current size of packet fragment data in reassembly.
reassem_frag_size: count &log;
## Current size of unknown data in reassembly (this is only the PIA buffer right now).
reassem_unknown_size: count &log;
}; };
## Event to catch stats as they are written to the logging stream. ## Event to catch stats as they are written to the logging stream.
@ -53,38 +87,69 @@ event bro_init() &priority=5
Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats, $path="stats"]); Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats, $path="stats"]);
} }
event check_stats(last_ts: time, last_ns: NetStats, last_res: bro_resources) event check_stats(then: time, last_ns: NetStats, last_cs: ConnStats, last_ps: ProcStats, last_es: EventStats, last_rs: ReassemblerStats, last_ts: TimerStats, last_fs: FileAnalysisStats, last_ds: DNSStats)
{ {
local now = current_time(); local nettime = network_time();
local ns = net_stats(); local ns = get_net_stats();
local res = resource_usage(); local cs = get_conn_stats();
local ps = get_proc_stats();
local es = get_event_stats();
local rs = get_reassembler_stats();
local ts = get_timer_stats();
local fs = get_file_analysis_stats();
local ds = get_dns_stats();
if ( bro_is_terminating() ) if ( bro_is_terminating() )
# No more stats will be written or scheduled when Bro is # No more stats will be written or scheduled when Bro is
# shutting down. # shutting down.
return; return;
local info: Info = [$ts=now, $peer=peer_description, $mem=res$mem/1000000, local info: Info = [$ts=nettime,
$pkts_proc=res$num_packets - last_res$num_packets, $peer=peer_description,
$events_proc=res$num_events_dispatched - last_res$num_events_dispatched, $mem=ps$mem/1048576,
$events_queued=res$num_events_queued - last_res$num_events_queued]; $pkts_proc=ns$pkts_recvd - last_ns$pkts_recvd,
$bytes_recv = ns$bytes_recvd - last_ns$bytes_recvd,
$active_tcp_conns=cs$num_tcp_conns,
$tcp_conns=cs$cumulative_tcp_conns - last_cs$cumulative_tcp_conns,
$active_udp_conns=cs$num_udp_conns,
$udp_conns=cs$cumulative_udp_conns - last_cs$cumulative_udp_conns,
$active_icmp_conns=cs$num_icmp_conns,
$icmp_conns=cs$cumulative_icmp_conns - last_cs$cumulative_icmp_conns,
$reassem_tcp_size=rs$tcp_size,
$reassem_file_size=rs$file_size,
$reassem_frag_size=rs$frag_size,
$reassem_unknown_size=rs$unknown_size,
$events_proc=es$dispatched - last_es$dispatched,
$events_queued=es$queued - last_es$queued,
$timers=ts$cumulative - last_ts$cumulative,
$active_timers=ts$current,
$files=fs$cumulative - last_fs$cumulative,
$active_files=fs$current,
$dns_requests=ds$requests - last_ds$requests,
$active_dns_requests=ds$pending
];
# Someone's going to have to explain what this is and add a field to the Info record.
# info$util = 100.0*((ps$user_time + ps$system_time) - (last_ps$user_time + last_ps$system_time))/(now-then);
if ( reading_live_traffic() ) if ( reading_live_traffic() )
{ {
info$lag = now - network_time(); info$pkt_lag = current_time() - nettime;
# Someone's going to have to explain what this is and add a field to the Info record.
# info$util = 100.0*((res$user_time + res$system_time) - (last_res$user_time + last_res$system_time))/(now-last_ts);
info$pkts_recv = ns$pkts_recvd - last_ns$pkts_recvd;
info$pkts_dropped = ns$pkts_dropped - last_ns$pkts_dropped; info$pkts_dropped = ns$pkts_dropped - last_ns$pkts_dropped;
info$pkts_link = ns$pkts_link - last_ns$pkts_link; info$pkts_link = ns$pkts_link - last_ns$pkts_link;
info$bytes_recv = ns$bytes_recvd - last_ns$bytes_recvd;
} }
Log::write(Stats::LOG, info); Log::write(Stats::LOG, info);
schedule stats_report_interval { check_stats(now, ns, res) }; schedule report_interval { check_stats(nettime, ns, cs, ps, es, rs, ts, fs, ds) };
} }
event bro_init() event bro_init()
{ {
schedule stats_report_interval { check_stats(current_time(), net_stats(), resource_usage()) }; schedule report_interval { check_stats(network_time(), get_net_stats(), get_conn_stats(), get_proc_stats(), get_event_stats(), get_reassembler_stats(), get_timer_stats(), get_file_analysis_stats(), get_dns_stats()) };
} }
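As with the other log streams, the rewritten stats log can also be consumed
via its event; a hedged example (not part of the patch) that watches for
packet drops on live traffic:

event Stats::log_stats(rec: Stats::Info)
    {
    if ( rec?$pkts_dropped && rec$pkts_dropped > 0 )
        print fmt("%s dropped %d packets during the last reporting interval",
                  rec$peer, rec$pkts_dropped);
    }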

View file

@ -29,6 +29,7 @@
@load frameworks/intel/seen/where-locations.bro @load frameworks/intel/seen/where-locations.bro
@load frameworks/intel/seen/x509.bro @load frameworks/intel/seen/x509.bro
@load frameworks/files/detect-MHR.bro @load frameworks/files/detect-MHR.bro
@load frameworks/files/entropy-test-all-files.bro
#@load frameworks/files/extract-all-files.bro #@load frameworks/files/extract-all-files.bro
@load frameworks/files/hash-all-files.bro @load frameworks/files/hash-all-files.bro
@load frameworks/packet-filter/shunt.bro @load frameworks/packet-filter/shunt.bro

View file

@ -118,6 +118,7 @@ include(BifCl)
set(BIF_SRCS set(BIF_SRCS
bro.bif bro.bif
stats.bif
event.bif event.bif
const.bif const.bif
types.bif types.bif

View file

@ -108,9 +108,9 @@ bool ConnectionTimer::DoUnserialize(UnserialInfo* info)
return true; return true;
} }
unsigned int Connection::total_connections = 0; uint64 Connection::total_connections = 0;
unsigned int Connection::current_connections = 0; uint64 Connection::current_connections = 0;
unsigned int Connection::external_connections = 0; uint64 Connection::external_connections = 0;
IMPLEMENT_SERIAL(Connection, SER_CONNECTION); IMPLEMENT_SERIAL(Connection, SER_CONNECTION);

View file

@ -220,11 +220,11 @@ public:
unsigned int MemoryAllocation() const; unsigned int MemoryAllocation() const;
unsigned int MemoryAllocationConnVal() const; unsigned int MemoryAllocationConnVal() const;
static unsigned int TotalConnections() static uint64 TotalConnections()
{ return total_connections; } { return total_connections; }
static unsigned int CurrentConnections() static uint64 CurrentConnections()
{ return current_connections; } { return current_connections; }
static unsigned int CurrentExternalConnections() static uint64 CurrentExternalConnections()
{ return external_connections; } { return external_connections; }
// Returns true if the history was already seen, false otherwise. // Returns true if the history was already seen, false otherwise.
@ -315,9 +315,9 @@ protected:
unsigned int saw_first_orig_packet:1, saw_first_resp_packet:1; unsigned int saw_first_orig_packet:1, saw_first_resp_packet:1;
// Count number of connections. // Count number of connections.
static unsigned int total_connections; static uint64 total_connections;
static unsigned int current_connections; static uint64 current_connections;
static unsigned int external_connections; static uint64 external_connections;
string history; string history;
uint32 hist_seen; uint32 hist_seen;

View file

@ -346,6 +346,7 @@ DFA_State* DFA_State_Cache::Lookup(const NFA_state_list& nfas,
++misses; ++misses;
return 0; return 0;
} }
++hits;
delete *hash; delete *hash;
*hash = 0; *hash = 0;
@ -433,19 +434,6 @@ void DFA_Machine::Dump(FILE* f)
start_state->ClearMarks(); start_state->ClearMarks();
} }
void DFA_Machine::DumpStats(FILE* f)
{
DFA_State_Cache::Stats stats;
dfa_state_cache->GetStats(&stats);
fprintf(f, "Computed dfa_states = %d; Classes = %d; Computed trans. = %d; Uncomputed trans. = %d\n",
stats.dfa_states, EC()->NumClasses(),
stats.computed, stats.uncomputed);
fprintf(f, "DFA cache hits = %d; misses = %d\n",
stats.hits, stats.misses);
}
unsigned int DFA_Machine::MemoryAllocation() const unsigned int DFA_Machine::MemoryAllocation() const
{ {
DFA_State_Cache::Stats s; DFA_State_Cache::Stats s;

View file

@ -89,10 +89,9 @@ public:
int NumEntries() const { return states.Length(); } int NumEntries() const { return states.Length(); }
struct Stats { struct Stats {
unsigned int dfa_states; // Sum of all NFA states
// Sum over all NFA states per DFA state.
unsigned int nfa_states; unsigned int nfa_states;
unsigned int dfa_states;
unsigned int computed; unsigned int computed;
unsigned int uncomputed; unsigned int uncomputed;
unsigned int mem; unsigned int mem;
@ -132,7 +131,6 @@ public:
void Describe(ODesc* d) const; void Describe(ODesc* d) const;
void Dump(FILE* f); void Dump(FILE* f);
void DumpStats(FILE* f);
unsigned int MemoryAllocation() const; unsigned int MemoryAllocation() const;

View file

@ -66,6 +66,7 @@ Dictionary::Dictionary(dict_order ordering, int initial_size)
delete_func = 0; delete_func = 0;
tbl_next_ind = 0; tbl_next_ind = 0;
cumulative_entries = 0;
num_buckets2 = num_entries2 = max_num_entries2 = thresh_entries2 = 0; num_buckets2 = num_entries2 = max_num_entries2 = thresh_entries2 = 0;
den_thresh2 = 0; den_thresh2 = 0;
} }
@ -444,6 +445,7 @@ void* Dictionary::Insert(DictEntry* new_entry, int copy_key)
// on lists than prepending. // on lists than prepending.
chain->append(new_entry); chain->append(new_entry);
++cumulative_entries;
if ( *max_num_entries_ptr < ++*num_entries_ptr ) if ( *max_num_entries_ptr < ++*num_entries_ptr )
*max_num_entries_ptr = *num_entries_ptr; *max_num_entries_ptr = *num_entries_ptr;

View file

@ -71,6 +71,12 @@ public:
max_num_entries + max_num_entries2 : max_num_entries; max_num_entries + max_num_entries2 : max_num_entries;
} }
// Total number of entries ever.
uint64 NumCumulativeInserts() const
{
return cumulative_entries;
}
// True if the dictionary is ordered, false otherwise. // True if the dictionary is ordered, false otherwise.
int IsOrdered() const { return order != 0; } int IsOrdered() const { return order != 0; }
@ -166,6 +172,7 @@ private:
int num_buckets; int num_buckets;
int num_entries; int num_entries;
int max_num_entries; int max_num_entries;
uint64 cumulative_entries;
double den_thresh; double den_thresh;
int thresh_entries; int thresh_entries;

View file

@ -10,8 +10,8 @@
EventMgr mgr; EventMgr mgr;
int num_events_queued = 0; uint64 num_events_queued = 0;
int num_events_dispatched = 0; uint64 num_events_dispatched = 0;
Event::Event(EventHandlerPtr arg_handler, val_list* arg_args, Event::Event(EventHandlerPtr arg_handler, val_list* arg_args,
SourceID arg_src, analyzer::ID arg_aid, TimerMgr* arg_mgr, SourceID arg_src, analyzer::ID arg_aid, TimerMgr* arg_mgr,

View file

@ -72,8 +72,8 @@ protected:
Event* next_event; Event* next_event;
}; };
extern int num_events_queued; extern uint64 num_events_queued;
extern int num_events_dispatched; extern uint64 num_events_dispatched;
class EventMgr : public BroObj { class EventMgr : public BroObj {
public: public:

View file

@ -28,7 +28,7 @@ void FragTimer::Dispatch(double t, int /* is_expire */)
FragReassembler::FragReassembler(NetSessions* arg_s, FragReassembler::FragReassembler(NetSessions* arg_s,
const IP_Hdr* ip, const u_char* pkt, const IP_Hdr* ip, const u_char* pkt,
HashKey* k, double t) HashKey* k, double t)
: Reassembler(0) : Reassembler(0, REASSEM_FRAG)
{ {
s = arg_s; s = arg_s;
key = k; key = k;

View file

@ -628,10 +628,12 @@ void builtin_error(const char* msg, BroObj* arg)
} }
#include "bro.bif.func_h" #include "bro.bif.func_h"
#include "stats.bif.func_h"
#include "reporter.bif.func_h" #include "reporter.bif.func_h"
#include "strings.bif.func_h" #include "strings.bif.func_h"
#include "bro.bif.func_def" #include "bro.bif.func_def"
#include "stats.bif.func_def"
#include "reporter.bif.func_def" #include "reporter.bif.func_def"
#include "strings.bif.func_def" #include "strings.bif.func_def"
@ -640,13 +642,22 @@ void builtin_error(const char* msg, BroObj* arg)
void init_builtin_funcs() void init_builtin_funcs()
{ {
bro_resources = internal_type("bro_resources")->AsRecordType(); ProcStats = internal_type("ProcStats")->AsRecordType();
net_stats = internal_type("NetStats")->AsRecordType(); NetStats = internal_type("NetStats")->AsRecordType();
matcher_stats = internal_type("matcher_stats")->AsRecordType(); MatcherStats = internal_type("MatcherStats")->AsRecordType();
ConnStats = internal_type("ConnStats")->AsRecordType();
ReassemblerStats = internal_type("ReassemblerStats")->AsRecordType();
DNSStats = internal_type("DNSStats")->AsRecordType();
GapStats = internal_type("GapStats")->AsRecordType();
EventStats = internal_type("EventStats")->AsRecordType();
TimerStats = internal_type("TimerStats")->AsRecordType();
FileAnalysisStats = internal_type("FileAnalysisStats")->AsRecordType();
ThreadStats = internal_type("ThreadStats")->AsRecordType();
var_sizes = internal_type("var_sizes")->AsTableType(); var_sizes = internal_type("var_sizes")->AsTableType();
gap_info = internal_type("gap_info")->AsRecordType();
#include "bro.bif.func_init" #include "bro.bif.func_init"
#include "stats.bif.func_init"
#include "reporter.bif.func_init" #include "reporter.bif.func_init"
#include "strings.bif.func_init" #include "strings.bif.func_init"

View file

@ -1,5 +1,9 @@
// See the file "COPYING" in the main distribution directory for copyright. // See the file "COPYING" in the main distribution directory for copyright.
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include "IP.h" #include "IP.h"
#include "Type.h" #include "Type.h"
#include "Val.h" #include "Val.h"
@ -403,6 +407,17 @@ RecordVal* IP_Hdr::BuildPktHdrVal(RecordVal* pkt_hdr, int sindex) const
break; break;
} }
case IPPROTO_ICMPV6:
{
const struct icmp6_hdr* icmpp = (const struct icmp6_hdr*) data;
RecordVal* icmp_hdr = new RecordVal(icmp_hdr_type);
icmp_hdr->Assign(0, new Val(icmpp->icmp6_type, TYPE_COUNT));
pkt_hdr->Assign(sindex + 4, icmp_hdr);
break;
}
default: default:
{ {
// This is not a protocol we understand. // This is not a protocol we understand.
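Script-land sees the new ICMPv6 information as the icmp field of pkt_hdr. A
hedged sketch of inspecting it (new_packet fires for every packet and is
expensive; for illustration only):

event new_packet(c: connection, p: pkt_hdr)
    {
    if ( p?$icmp )
        print fmt("ICMP/ICMPv6 message of type %d on %s", p$icmp$icmp_type, c$uid);
    }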

View file

@ -285,11 +285,6 @@ void NFA_Machine::Dump(FILE* f)
first_state->ClearMarks(); first_state->ClearMarks();
} }
void NFA_Machine::DumpStats(FILE* f)
{
fprintf(f, "highest NFA state ID is %d\n", nfa_state_id);
}
NFA_Machine* make_alternate(NFA_Machine* m1, NFA_Machine* m2) NFA_Machine* make_alternate(NFA_Machine* m1, NFA_Machine* m2)
{ {
if ( ! m1 ) if ( ! m1 )

Some files were not shown because too many files have changed in this diff.